
Dominic Cronin's weblog

Getting IIS Express to run in a 64 bit process, and other fun Tridion content delivery configurations

Posted by Dominic Cronin at Jul 24, 2013 09:55 PM |

In the last couple of days, I've spent far more time than I'd like figuring out how to get a Tridion-based web application to run correctly under Visual Studio. There are three basic choices:

  1. Run it directly using Visual Studio
  2. Run it using IIS Express
  3. Run it using IIS (non-Express version)

As the application is intended to run on a 64 bit architecture, there are some challenges. Visual Studio runs in 32 bit mode, so the first option is out. Using full-on IIS is an attractive thought; you can manually configure the application pool to run in 64 bit mode. Unfortunately, getting a debug session up and running takes more configuration than that. You have to set up the web site correctly, and it was just too fiddly. I ran out of time, or steam or whatever. (Somebody will probably tell me it's easy, and I dare say it is when you know how, and aren't spending time you really should be spending on something else. Any hints are always welcome.)

Of course, with a Tridion site, half the game is making sure you have the correct DLLs in place for the processor architecture you are using. Along the way, I discovered that the quick and dirty way to tell whether you have a 32 or 64 bit version of xmogrt.dll (Juggernet's "native" layer) is its size. The 64 bit version comes in at 1600KB and the 32 bit version is about half that at 800KB or so. This varies from version to version, so on a 2013 system it's 1200ish/900ish KB, but once you get the hang of it, you can tell them apart at sight, which is pretty useful. The other DLLs are also important, although as far as I can tell, only Tridion.ContentDelivery.AmbientData.dll is hard-compiled for 64 bit architecture, at least on the 2011 system I was working on. The rest of the .NET assemblies are compiled to MSIL, which, of course, will run on either architecture.

But I digress. The thing I wanted to blog (and this will definitely be tagged note-to-self) was how to get IIS Express to run in 64 bit mode. By default it runs on 32 bits, but if you follow this link:

http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/3254745-allow-for-iis-express-64-bit-to-run-from-visual-st

... you will find the following nugget of goodness:

You can configure Visual Studio 2012 to use IIS Express 64-bit by setting the following registry key:

reg add HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\WebProjects /v Use64BitIISExpress /t REG_DWORD /d 1

However, this feature is not supported and has not been fully tested by Microsoft. Improved support for IIS Express 64-bit is under consideration for the next release of Visual Studio.

Very handy indeed. With that registry value in place, running under IIS Express in 64 bit mode is just one click of a button. It just works.
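
If, like me, you'd rather stay in PowerShell than shell out to reg.exe, the same setting (it's exactly the key and value from the quote above) can be applied like this - a minimal sketch:

New-Item -Path HKCU:\Software\Microsoft\VisualStudio\11.0\WebProjects -Force | Out-Null
Set-ItemProperty -Path HKCU:\Software\Microsoft\VisualStudio\11.0\WebProjects -Name Use64BitIISExpress -Value 1 -Type DWord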

And by way of a PS (Post Script, that is, not PowerShell), here's how you find the processor architecture of a DLL. (This time on my 2013 image.)

PS C:\inetpub\www.visitorsweb.local\bin> [reflection.assemblyname]::GetAssemblyName((resolve-path '.\Tridion.ContentDelivery.AmbientData.dll')).ProcessorArchitecture
MSIL
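
Note that GetAssemblyName only works on managed assemblies - point it at a native DLL such as xmogrt.dll and you'll get a BadImageFormatException. For the native case you can read the machine type straight out of the PE header instead. Here's a quick sketch (the offsets come from the standard PE format, so this ought to work on any DLL or EXE, but treat it as a starting point rather than gospel):

$bytes = [System.IO.File]::ReadAllBytes((resolve-path '.\xmogrt.dll'))
# The DWORD at offset 0x3C points to the "PE\0\0" signature;
# the Machine field of the IMAGE_FILE_HEADER follows 4 bytes after that.
$peOffset = [System.BitConverter]::ToInt32($bytes, 0x3C)
$machine = [System.BitConverter]::ToUInt16($bytes, $peOffset + 4)
switch ($machine) {
	0x014c  { '32 bit (x86)' }
	0x8664  { '64 bit (x64)' }
	default { 'unknown (0x{0:X4})' -f $machine }
}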

Well anyway - it's no fun scratching your head over stuff like this. Maybe this helps.

Understanding the eight Powershell profiles (or Nobody expects the Spanish Inquisition)

Posted by Dominic Cronin at Jun 23, 2013 09:55 PM |

A while ago, I started using the Windows Powershell ISE, and immediately fell foul of the fact that the scripts I was debugging didn't work because their environment was set up in my $profile. It turns out that the ISE has a separate profile from the one used in the PowerShell proper. It's easy enough to deal with once you know about this, although my first attempt at working round the problem would have been better if I'd known how easy it is to get the names of the different profile locations. At that time, things would have gone better if I'd first read this article by Ed Wilson on the Scripting Guy blog: Understanding the Six Powershell Profiles.

In this article, he explains that there are four profiles which are loaded by the PowerShell on startup. You can get the names of them by looking at the $profile variable. The commonest use of this is to do something like:

notepad $profile

to edit your profile. Well as you already know by this point, there are more. You can see these with a little more typing, as follows:

$profile.CurrentUserAllHosts
$profile.CurrentUserCurrentHost
$profile.AllUsersCurrentHost
$profile.AllUsersAllHosts

What you get when you type $profile is the same as $profile.CurrentUserCurrentHost.
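
A quick way to see all four locations at once, and whether the files actually exist, is something like this (a trivial sketch):

foreach ($name in 'AllUsersAllHosts','AllUsersCurrentHost','CurrentUserAllHosts','CurrentUserCurrentHost') {
	'{0,-22} : {1} (exists: {2})' -f $name, $profile.$name, (Test-Path $profile.$name)
}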

So that's four. You can get to six quite easily by understanding that the ISE and the PowerShell console are different hosts, and so are entitled to specify their own locations for the CurrentHost versions. Of course, having got that far, we can immediately suppose that any application that hosts the PowerShell might also have its own CurrentHost profiles, so in theory it's limitless. Whatever... I don't have any beef with Ed for saying there are six - he was describing the general case, and doing a good job of helping people avoid a slightly non-obvious pitfall. (Like I said - I wish I'd read his article earlier than I did.)

So there you have it. We can say eight, ten, twelve... whatever... so what's the point of this blog post? Well, it turns out that I managed to uncover a different non-obvious pitfall. Drove me crazy for a while, I can tell you!

I quite often use Vim for text editing, and I thought it would be cool to get it wired up to work with PowerShell. I already had PowerShell syntax highlighting set up, but I wanted to be able to open up files and edit them from within the shell. There's plenty of material on the Internet about how to do this, and I set to with a will. Not too hard - set up a couple of aliases and so forth. But then I found that when I opened up files directly in the shell (instead of spawning a gVim window) the syntax highlighting combined with the background colour of the shell was just awful. If I am in an elevated shell, I have this set to DarkRed, and the code to do this is in my profile. So I went to disable it. Looked in my profile. The colour changing code wasn't there. Then I remembered the other profiles - and that I'd put this code into one of the AllUsers profiles, or at least I thought I had - because it wasn't there either. Well, it was definitely executing. If I started the powershell with -noprofile, I got a dark blue background, and without -noprofile, I got red. Hmm... Then I edited all four profiles so that each would emit its name when executed. Nothing doing! The AllUsers profiles weren't even executing.

Eventually I poked at it some more, and realised that if I used notepad to edit the AllUsers profiles, I could see my colour changing code, but from vi I saw my "This is the AllUsersCurrentHost profile" line. Bingo! You've probably guessed by now. It was a 64 bit server, and as the AllUsers profiles live down in C:\Windows\System32, WOW64 file system redirection mirrors them in C:\Windows\SysWOW64\. Which copy you see (and edit) therefore depends on whether you are using a 32 bit or a 64 bit editor.
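
If you want to check whether the same thing has bitten you, you can compare the two copies directly. A sketch, assuming the default locations (run it from a 64 bit PowerShell, which sees both directories without any redirection):

# profile.ps1 is AllUsersAllHosts; the host-specific profiles sit alongside it
$suffix = 'WindowsPowerShell\v1.0\profile.ps1'
foreach ($dir in 'System32','SysWOW64') {
	$path = Join-Path $env:windir (Join-Path $dir $suffix)
	if (Test-Path $path) {
		'{0} : {1} bytes' -f $path, (Get-Item $path).Length
	}
}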

So according to Ed's counting system, the total is now eight!

Fun with tag clouds

Posted by Dominic Cronin at Apr 21, 2013 10:07 PM |

Alvin Reyes recently posted about Tridion bloggers, and included a bunch of tag clouds based on the blogs of various folks at SDL. I didn't want to miss out on the fun, so I've followed his link to wordle and created my own. For some reason, it's strangely satisfying. I'm quite pleased with it. :-)

Tag Cloud

Pacman

Posted by Dominic Cronin at Apr 17, 2013 11:55 PM |

This is mostly by way of a "note to self". I've recently started working at a customer where connecting my computer to their network is not just allowed, but necessary. Once connected, if I want to use the Internet, I have to go through their filtering proxy - presumably to keep the badness of the Internet from their systems (and yes, they do pay a lot of attention to ensuring the machine is virus-free). Previously, when I worked there for a day or two, setting up the proxy was a minor irritation, but as I'm going to be there rather longer, the idea of reconfiguring my networking twice a day started to look pretty unattractive. My first attempt at solving this had been to have a couple of scripts that set up the proxy by making the relevant registry settings, but unfortunately, Windows doesn't pick these up immediately. Yeah - sure - if I could remember to run the scripts before shutting down it might work, but I'm not that obsessive. Or I could get Windows to pick up the settings by opening the various screens... Internet Options... Connections... LAN Settings... oh wait... there had to be a better way.

It turns out that there's something called a Proxy Auto-configuration (PAC) file. If you select "Automatically detect settings", Windows will try to locate one of these on the network using the Web Proxy Auto-discovery Protocol; however, the customer in question doesn't do this. My needs were simple enough, though, so I checked the next box down: the one that says "Use automatic configuration script". All that remained was to create the script.

It turns out that you write such things in JavaScript: it's simply a matter of writing a function whose name is laid down in the PAC standard (FindProxyForURL), making use of the helper functions the standard makes available. Here's what I ended up with (although I'll probably add refinements):

function FindProxyForURL(url, host) {
	var customerProxy = "PROXY 10.62.40.42:1234";

	if (atCustomer()){
		if (dnsDomainIs(host, ".internal.customer.com") || dnsDomainIs(host, "localhost") || dnsDomainIs(host, ".local")) {
			return "DIRECT";
		} else {
			return customerProxy;
		}
	} else {
		return "DIRECT";
	}
	
}

function atCustomer(){
	return isResolvable("server.not.on.external.dns");
	// or maybe
	// return isInNet(myIpAddress(), "10.62.0.0", "255.255.0.0"); 
}

Nothing fancy, but it works. I suspect I'll find a few edge cases where I maybe have to enhance the script or even configure things by hand, but for now I have the satisfaction of knowing I can just turn up, plug in, and start work.
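
For what it's worth, pointing Windows at the script can itself be scripted. The "Use automatic configuration script" checkbox corresponds to the AutoConfigURL value in the registry. A sketch, with a placeholder URL (and the same caveat as before: Windows doesn't always pick up raw registry changes until you poke the Internet Options screens or start a new session):

Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings' -Name AutoConfigURL -Value 'http://intranet.customer.com/proxy.pac'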

Dumping publication properties to a spreadsheet - a Powershell one-liner

Posted by Dominic Cronin at Apr 15, 2013 10:50 PM |

A colleague mentioned to me today that he'd solved a problem with Tridion publishing that had been caused by the publication path being incorrectly set on some publications. We talked about how useful it would be to be able to get a summary of the paths without having to open every publication. Time for a powershell one-liner! How about this?

$core.GetSystemWideList((new-object PublicationsFilterData)) | select-object -property Title,MultimediaPath,MultimediaURL,PublicationPath | Export-Csv -path c:\pubs.csv

Those of you who have been following along will notice that I have imported the core service namespace using the reflection module, but even on a bare Tridion system, you could type this easily enough. The CSV file can be opened up directly in Excel, and Bob's your uncle.
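
For the record, here's the same one-liner without the type accelerators from the reflection module - nothing changes except that you spell out the namespace in full:

$core.GetSystemWideList((new-object Tridion.ContentManager.CoreService.Client.PublicationsFilterData)) | select-object -property Title,MultimediaPath,MultimediaURL,PublicationPath | Export-Csv -path c:\pubs.csv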

A Tridion tree-walk in Powershell

Posted by Dominic Cronin at Apr 08, 2013 10:30 PM |

Now that I've got some reasonably terse syntax working for Tridion scripting, it's time to start building out some tooling to make the whole thing useful. It's quite often useful to be able to enumerate everything in your Tridion system, so walking the tree is a basic operation. You don't want to write the tree walk every time you have a different operation to perform, so it's handy to abstract the mechanics of the recursion out into a function. Somewhere in the nether regions of this blog, you'll find a JavaScript implementation of such a function. The basic technique I used in JavaScript was to have my tree-walking function accept a "process" function as an argument. For each item in your system, this is invoked, and is able to perform whatever processing is necessary on your item. (In the JavaScript version, I actually had two functions: process and filter. The filter function was responsible for deciding whether the item was interesting to process. In practice, this is probably too much abstraction. You can just as easily code an if-block in your process function, so on this occasion I'm restricting myself to just the one.)

To anyone who has written any JavaScript, it's pretty much impossible to miss the fact that functions are first-class objects. It may not be immediately apparent that this is true in Powershell, but it is. A Script Block in Powershell is simply an anonymous function, and you can pass script blocks around in variables or as parameters to other functions. (These days, the concept isn't even weird to C# hackers, what with lambda expressions and all.)
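
If you haven't come across this before, a trivial (non-Tridion) example makes the point:

function twice([ScriptBlock]$scriptblock){
	# Invoke the script block we were handed, twice
	&$scriptblock
	&$scriptblock
}
twice { "Hello from a script block" }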

So - here goes: if you start with the function "recurseTridionItems" shown below....

import-module Reflection
import-namespace Tridion.ContentManager.CoreService.Client

function recurseTridionItems{
	Param(
	[parameter(Mandatory=$true)]
	[ValidateNotNullOrEmpty()]
	[SessionAwareCoreServiceClient]$core, 
	[IdentifiableObjectData]$parent, 
	[ScriptBlock]$scriptblock,
	[int]$level = 0
	)
	$ro = new-object ReadOptions
	if ($parent -eq $null){
		[PublicationData[]]$items = @($core.GetSystemWideList((new-object PublicationsFilterData)))
		foreach ($item in $items) {
			$fullItem = $core.Read($item.Id, $ro)
			&$Scriptblock $fullItem $level
			recurseTridionItems $core $fullItem $scriptblock ($level + 1)
		}
	}
	else {
		if ($parent -is [OrganizationalItemData]){
			$items = $core.GetList($parent.Id, (new-object OrganizationalItemItemsFilterData))
		} else {
			$items = $core.GetList($parent.Id, (new-object RepositoryItemsFilterData))
		}

		foreach($item in $items) {
			$fullItem = $core.Read($item.Id, $ro)
			&$Scriptblock $fullItem $level
			# Only containers (publications and organizational items) can have
			# children, so only they warrant a recursive call
			if ($fullItem -is [PublicationData] -or $fullItem -is [OrganizationalItemData]) {
				recurseTridionItems $core $fullItem $scriptblock ($level + 1)
			}
		}
	}
}

... this will take care of all the tree walking. For an example to show how you might use this, I've written a script block that outputs the Title of the item, indented based on the recursion level.

EDIT: my first version of this function didn't re-read the items that come from GetList. It worked fine for the trivial case of listing the titles but as soon as I tried anything more interesting, I discovered that GetList returns objects that are only partially loaded. This is apparently by design, as the documentation mentions it.

recurseTridionItems $core $null {param($item,$level)"`t" * $level + $item.Title}

On my system, this produces output like this:

_Empty Master
        Building Blocks
                Default Templates
                        Outbound E-mail
                                Generate Plain Text E-mail
                                Outbound E-mail Post-processing
                                Outbound E-mail Pre-processing
                                Generate Plain Text E-mail
                                Outbound E-mail Post-processing
                                Outbound E-mail Pre-processing
                                Set Output Item By Email Mode
                                Tridion.OutboundEmail.Templating.Templates
                        SDL External Content Library
                                Adjust SiteEdit 2009 markup for External Content Library i
                                Adjust SiteEdit 2012 markup for External Content Library i
                                Resolve External Content Library items
                                Search External Content Library items
                                Tridion.ExternalContentLibrary.Templating
                        Component Query
                        Convert Html to Xml
                        Convert Xml to Html
                        Default Finish Actions
                        Dreamweaver Region Selection
                        Enable inline editing for content
                        Enable inline editing for Page
                        Extract Binaries from Html
                        Image Resizer
                        Link Resolver
                        Publish Binaries in Package
                        Default Component Template
                        Default Component Template for UGC
                        Default Page Template
                        Default Page Template for UGC
                        Activate Tracking
                        Cleanup Template
                        Component Query
                        Convert Html to Xml
                        Convert Xml to Html
                        Default Dreamweaver Component Design
                        Default Dreamweaver Page Design
                        Default Finish Actions
                        Default UGC Dreamweaver Template design
                        Enable inline editing for content
                        Enable inline editing for Page
                        Enable User Generated Content Processing
                        Extract Binaries from Html
                        Extract Components from Page
                        Image Resizer
                        Link Resolver
                        Publish Binaries in Package
                        Sample XSLT Component Design
                        Target Group Personalization
                        Tridion.SiteEdit.Templating
                        Tridion.Ugc.Templating.DefaultTemplates
                Default Multimedia Schema
        root
01 Definitions
        Building Blocks
                Default Templates
                        Outbound E-mail
                                Generate Plain Text E-mail

I think I'll truncate it there: you get the picture. Obviously, this is a trivial use-case that probably isn't terribly useful on an industrial scale installation. Fortunately, your script-block doesn't have to be a one-liner, and you can easily expand on this technique to meet your own needs. I should think I'll find quite a few uses for it myself. Just one word of caution: this was just a quick hack, and I haven't tested it exhaustively.
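
By way of illustration, here's the kind of thing I have in mind - a hypothetical example (the Id and Title properties are real enough, but I haven't tested this variant exhaustively either). It collects every item into objects that you can then filter, sort or export:

$results = New-Object System.Collections.ArrayList
recurseTridionItems $core $null {
	param($item, $level)
	# The script block can see $results via Powershell's dynamic scoping
	[void]$results.Add((New-Object PSObject -Property @{
		Id    = $item.Id
		Title = $item.Title
		Type  = $item.GetType().Name
	}))
}
$results | Export-Csv -path c:\inventory.csv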

Straightforward Powershell scripting with the Tridion core service

Posted by Dominic Cronin at Apr 05, 2013 01:30 AM |

Almost exactly a year ago, I blogged about Getting to grips with the Tridion core service in Powershell. The core service had been around for a while even then, and the point was to actually start using it for some of the scripting tasks I had habitually done via the TOM. In many ways the TOM was much more script-friendly. Of course, that might have had something to do with the fact that it was created expressly for use from scripting languages. The Tridion core service API wasn't. I don't know exactly what they had in mind, but I'd imagine the thinking was that most mainstream users would use C#. Yeah, sure - any compliant .NET language would do, but F#? Nah!

But a year further on, and where are all those scripts I was going to write? I have to say, the comfort zone for scripting is quite different from that for writing "proper" programmes. There's huge usefulness in being able to hack out something quickly, and very much a sense that stuff will be intermingled in a code-is-data stylee. So when I started actually trying to use the core service for scripting tasks, it sucked pretty hard. There were two main areas of difficulty:

  1. Getting the core service wired up in the first place
  2. Powershell doesn't natively have the equivalent of C#'s using directive to allow you to avoid typing the full namespace of your type.


I covered the first point last year. Suffice it to say that currently, I'm still using Peter Kjaer's Tridion powershell module, although at the moment I'm running a local copy, modified to cope with the Tridion 2013 client, and also to allow me to specify which protocol I want to use. (Obviously I don't want to have a permanent fork, so with a bit of luck, Peter will be able to integrate some of this work into the next release of the module.) On a related subject, my experience has been that working with the core service client has some fundamental differences with using the TOM. You could keep a TDSE lying around for minutes at a time, and it would still be usable, even after a method call had failed. The core service, even when you're on the same server, is most definitely a web service. Failed calls tend to leave your connection in a "faulted" state (i.e. unusable), and the timeouts are generally shorter. Once you are aware of this, you can adjust your coding style accordingly, but it adds somewhat to the ritual.
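
In practice, this means it's worth checking the channel before re-using a client that's been lying around. Something like this sketch (assuming the generated client exposes the usual WCF State property, and using my hacked -protocol parameter):

if ($core.State -eq [System.ServiceModel.CommunicationState]::Faulted) {
	# A faulted channel can't be closed normally; abort it and start afresh
	$core.Abort()
	$core = Get-TridionCoreServiceClient -protocol nettcp
}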

The namespace issue is on the face of it more trivial. OK - so it's a PITA to have to type something like:

$folder = new-object Tridion.ContentManager.CoreService.Client.FolderData

when all you wanted was a folder. You could argue: "well it works, doesn't it? Get over it!". However, I found all this extra verbiage too much of a distraction, not only when reading and editing longer scripts, but also when "knocking off a quick one". After all, what's the point of having a great scripting environment if your one-liners aren't?

So what to do? Well, I scoured the Internet, and discovered that Powershell has something called a Type Accelerator. You've seen these often enough, as there are several available by default. For example, you can (and should) type "[string]" when what you really mean is "[System.String]". Unfortunately, creating type accelerators isn't completely straightforward, but no worries - the Powershell community is vibrant, and there are implementations available that take care of it for you. (OK, at the time of writing I know of one that works, but that's enough, eh? My first Googling had taken me to the Type Accelerators module (PSTX) at Codeplex. At first this seemed to be useful, but as soon as I moved to Tridion 2013, support for Powershell 3 became a hard requirement. That project is not actively maintained, and it doesn't work in Powershell 3. As I said, it's not straightforward to wire up type accelerators: the code uses an undocumented API, which changed. Not Microsoft's fault.)

At this point, I went to the Powershell IRC channel (#powershell on freenode) and asked there if anyone knew about fixes or updates. I was steered in the direction of Jaykul's Reflection module, available on Poshcode. (Make sure you get the latest version, and beware of the script getting truncated.) Installing modules is a fairly straightforward task: often as simple as dropping the files into a suitably named directory in your WindowsPowerShell modules directory (sometimes you need to "unblock" them). Here's a shot of what mine looks like. (What you can see is C:\Users\Administrator\Documents\WindowsPowerShell\Modules.)

My Modules folder

In there you can see the Reflection module and AutoLoad (which is another module it depends on). Apart from that you can see the Tridion core service module (and Pscx).
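
With those modules in place, the verbose new-object line from earlier collapses to this:

import-module Reflection
import-namespace Tridion.ContentManager.CoreService.Client
$folder = new-object FolderData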

With all this in place, you are set to start writing your "straightforward" Tridion scripts. I've chosen to demonstrate this by hacking out a script that will create a default publication layout for you. It will be a handy tool to have on my research image, but mostly it's to show some real-world scripting.

param ($publicationPrefix = "")

$core = Get-TridionCoreServiceClient -protocol nettcp
import-module reflection
import-namespace Tridion.ContentManager.CoreService.Client

function createPublication {
	Param(
		[parameter(Mandatory=$true)]
		[ValidateNotNullOrEmpty()]
		[SessionAwareCoreServiceClient]$core, 
		[parameter(Mandatory=$true)]
		[ValidateNotNullOrEmpty()]
		[string]$title, 
		[string]$key, 
		[string[]]$parents,
		[switch]$Passthru
	)
	write-host "Creating publication $title"
	$newPublication = $core.GetDefaultData([ItemType]::Publication,"",$null)
	$newPublication.Title = $title
	if ($key -eq [string]::Empty){
		$newPublication.Key = $title
	}
	else {
		$newPublication.Key = $key
	}
	foreach ($parent in $parents){
		$link = new-object LinkToRepositoryData
		if ($parent -match "^tcm:"){
			$link.IdRef = $parent
		} elseif ($parent -match "^/webdav"){
			$link.WebDavUrl = $parent
		} else {
			continue
		}
		$newPublication.Parents += $link
	}
	if ($Passthru){
		$core.Create($newPublication, (new-object ReadOptions))
	}
	else {
		$core.Create($newPublication,$null)
	}
}

function createFolder([SessionAwareCoreServiceClient]$core, [string]$parentId, [string]$title, [switch]$Passthru){
	write-Host "Creating folder $title"
	$newFolder = $core.GetDefaultData([ItemType]::Folder, $parentId, $null)
	$newFolder.Title = $title
	if ($Passthru){
		$core.Create($newFolder, (new-object ReadOptions))
	}
	else {
		$core.Create($newFolder, $null)
	}
}

function createStructureGroup([SessionAwareCoreServiceClient]$core, [string]$parentId, [string]$title, [string]$directory, [switch]$Passthru){
	write-Host "Creating Structure Group $title"
	$newStructureGroup = $core.GetDefaultData([ItemType]::StructureGroup, $parentId, $null)
	$newStructureGroup.Title = $title
	$newStructureGroup.Directory = $directory
	if ($Passthru){
		$core.Create($newStructureGroup, (new-object ReadOptions))
	}
	else {
		$core.Create($newStructureGroup, $null)
	}
}

$chainMasterPub = createPublication $core "$($publicationPrefix)ChainMaster" -Passthru
$rsg = createStructureGroup $core $chainMasterPub.Id "root" "root" -Passthru

$definitionsPub = createPublication $core "$($publicationPrefix)Definitions" -parents @($chainMasterPub.Id) -Passthru
$systemFolder = createFolder $core  $definitionsPub.RootFolder.IdRef "System" -Passthru
createFolder $core $systemFolder.Id "Schemas"

$contentPub = createPublication $core "$($publicationPrefix)Content" -parents @($definitionsPub.Id) -Passthru
$contentFolder = createFolder $core $contentPub.RootFolder.IdRef "Content" -Passthru

$layoutPub = createPublication $core "$($publicationPrefix)Layout" -parents @($definitionsPub.Id) -Passthru
createFolder $core $core.GetTcmUri($systemFolder.Id, $layoutPub.Id, $null) "Templates"

createPublication $core "$($publicationPrefix)Web" -parents @($contentPub.Id,$layoutPub.Id)

The script accepts a parameter which lets me prefix the publications with some name relevant to whatever I'm doing, so if you invoke it like this:

PS C:\code\dominic\tridion> .\CreateDefaultStructure.ps1 "Apple"
Connecting to the Core Service at localhost...
Creating publication Apple 00 ChainMaster
Creating Structure Group root
Creating publication Apple 01 Definitions
Creating folder System
Creating folder Schemas
Creating publication Apple 02 Content
Creating folder Content
Creating publication Apple 03 Layout
Creating folder Templates
Creating publication Apple 04 Web
PS C:\code\dominic\tridion> .\CreateDefaultStructure.ps1 "Banana"
Connecting to the Core Service at localhost...
Creating publication Banana 00 ChainMaster
Creating Structure Group root
Creating publication Banana 01 Definitions
Creating folder System
Creating folder Schemas
Creating publication Banana 02 Content
Creating folder Content
Creating publication Banana 03 Layout
Creating folder Templates
Creating publication Banana 04 Web
PS C:\code\dominic\tridion>

... you end up with publications like this:

The resulting publications


I import the Tridion-CoreService module in my Powershell profile, so it's not needed in the script. (As noted earlier, my copy is a bit hacked, as you can see from the fact that I'm passing a protocol parameter to Get-TridionCoreServiceClient). I don't import the reflection module by default, so this is done in the script, followed immediately by "import-namespace Tridion.ContentManager.CoreService.Client", which is the magic from the Reflection module that wires up all the type accelerators. Once this is done, you can see that I can simply type [ReadOptions] instead of [Tridion.ContentManager.CoreService.Client.ReadOptions], and so on. Much better, I think! :-)

If you're wondering about the -Passthru switch on my functions, this is a powershell idiom that lets you indicate whether or not you are interested in the return value. In Tridion, this is controlled by whether or not you pass a ReadOptions argument. Perhaps obviously, the Read() method wouldn't make any sense if it didn't return anything, so a $null works fine - I'm still agonizing over whether it would be more stylish to pass a ReadOptions anyway. What do you think?

Actually that's a good question. What do you think? I'm still trying to find my feet in terms of the correct idioms for this kind of work. Let's get the debate out in the open. Feel free to say mean things about my code (not obligatory). I've got a thick skin, and I'd genuinely value your feedback, especially if you think I'm doing it wrong.

Dominic is no longer a geek

Posted by Dominic Cronin at Mar 03, 2013 11:08 PM |

For as long as I've run a web site at www.dominic.cronin.nl, I've run the server at home, and used the free services of DynDns to take care of the fact that my Internet Service Provider assigns my IP address dynamically. The way this works is that DynDns have a list of domains that they will let you use for free by assigning you a sub-domain. One of these domains (perhaps the most famous) is "is-a-geek.org". For the last several years, I have had the use of "dominic.is-a-geek.org". For free. So thanks, DynDns. There's a protocol for automatically updating the IP address associated with this name, so should I ever get assigned a different IP, the dynamic DNS name would stay pointing to the right place. When you type "www.dominic.cronin.nl" into your browser, it actually goes to the DNS services at Qweb somewhere near Rotterdam, who host the records for my "real" domain name. The response you get from there includes a CNAME record pointing at dominic.is-a-geek.org (or at least it did), and for all practical purposes, you end up at the right place.

Of course, it's great that DynDns offer these free services, but sooner or later they want people to upgrade to their paid services, so for the last few years they've made it so that you have to "refresh" your free name every month. They send an email, you click on a link, and all is good. At least until today, when I realised that not only had my free name expired, they also wouldn't give it back to me if I wanted to stay a cheapskate. They had obviously cottoned on to the fact that the is-a-geek names are desirable (to some people). So there it is. My long-standing use of dominic.is-a-geek.org is over. The end of an era, I suppose.

For what it's worth, I'm still using a free DynDns account, but now it's "dominic.dyndns-home.com". Doesn't have the same ring about it, though. :-(

Fortunately - these days, Qweb have an excellent web administration interface, and within half an hour of realising the problem, I had it all joined up and working again. So I might be Qweb's most insignificant customer - it's hard to imagine anyone buying smaller amounts of service from them - but I'm definitely a happy customer.

As for DynDns - sure, I'm crying inside, but really - you can't accept free service for years and then whinge about it. It wouldn't be right, especially as they are still keeping my site running.

Alt text, title text and web content management

Posted by Dominic Cronin at Mar 03, 2013 10:35 PM |

I've been reading Alvin Reyes' recent blog post about "alt" text for images, and how you should manage it in Tridion. That triggered me to think about a few of these issues, and I'd like to respond to Alvin's post, and perhaps also broaden out the discussion a bit.

The typical scenario he describes is that you have a "content" component which links to a multimedia component representing an image. In the web site output, of course, the alt text needs to be associated with the image, but Alvin raises some questions over where we should manage this data in the CMS. He starts off by saying that the "Common practice" (of putting the alt text in the metadata of the multimedia component) is wrong, and that instead you should put the alt text in the content component. OK - so he then goes on to show that he understands the practical benefits of keeping the alt text with the image, and suggests a couple of other approaches, including putting the data in both places.

What Alvin describes may indeed be good practice for some organisations, but so much depends on the requirements. I think perhaps the most important take away from this discussion might be that unless you can elicit accurate requirements, you are going to get this wrong. Let's face it, the "common practice" technique is pretty solid. You have metadata that describes an image, so you manage it with the image. This isn't just an 80:20 rule, it's a 99% rule. The cases where you need multiple descriptions of the same image are rare in practice. Perhaps the obvious exception is where you're using the image as a link. Stop there a second... I mean the case where you're using a meaningful image to link to something and the description of your linking image is inappropriate for your link target.

OK - before I get carried away, let's deal with this. Firstly - there ought to be various distinct multimedia schemas in your site, intended for different purposes. Having a single "Image" schema is probably an anti-pattern. Tridion schemas have great support for constraining the choice of multimedia schema in helpful ways. (Although maybe it's weaker in Rich Text format areas). If you really, truly have a site where simple 80:20-rule alt text down on the bare metal doesn't work, then you can probably isolate the difficult cases to a particular image type or types. Obviously, you will want a schema for layout images (Lime green bullet, gothic top left corner, and so forth.) This schema should not allow for alt text, and the templating should always emit an empty alt attribute.

Then let's imagine we have a News schema, which allows for a relevant picture. When displaying the detail view of the news item, you obviously want the alt text that describes the image. "President Obama concedes defeat to the President of the NRA, at their meeting at the White House". When you render the thumbnail variant of the same image in a list of news items, the alt text will tell you that it's a link, and use the title of the news item. "Link to: Hope dies for gun control as Obama confirms his commitment to 2nd Amendment". You don't need to store any extra metadata to do this. Your multimedia schema (News event image) has exactly the metadata it needs, and your news item schema allows you to use multimedia components of this schema for adding a picture of the news event being described. (Should you also have a distinct schema for pictures whose only purpose is to look nice?)

Of course, there are sites where a high value is placed on accessibility, and where they'll do everything. Often these are government sites, or those of large enterprises that can afford the investment in keeping a squeaky clean image on how they treat the disabled. This still isn't the majority, though - by any means. The rest don't want to pay for it, and by that, I don't just mean they don't want to pay for the extra complexity in implementing all the bells and whistles. Perhaps even more to the point, they don't want to invest the extra effort that it will cost their "content" department to actually work with all this extra detail. To achieve an accessible web site, you need more than just clued-in technologists. The web content people need a pretty sophisticated understanding of the issues too. For alt text, they need to know when it should be empty, and they need to know how to write it with the right amount of context when it is needed. Generally, if you have optional fields in your schema, you'll need fairly good editorial control to ensure that it gets filled in at all. When your implementation has some sort of fallback mechanism that puts in a default when you leave it empty, how are you going to deal with that? An extra checkbox next to the alt text field that says "Yes, I really meant to leave it empty". (I'm joking, but if you're doing the site for a blind people's charity, you might even go to these extremes). So even with all the complexity built in to your implementation, you're still going to have to send all the content workers on courses where they get to play with screen readers and live for a day like the visually disadvantaged part of their audience, and then you're going to have to repeat this often enough to keep it real next year, and the year after.

So - most organisations aim way lower. If I had a Euro for every professional web content worker I've met who didn't know the difference between title text and alt text, I could probably go on a nice holiday. This is by no means a tirade against the content workers. Most of them are working very, very hard just keeping their sites updated, and it's no wonder they just want the CMS implementation to magically take care of all that accessibility stuff. If you put a separate field for alt text and title text, or a separate field for alt text in a different context, they will probably write something straightforward, paste it in all those places, and then curse you (again) for an idiot for making them do that last step (again). And then they'll go home and tweet that they hate Tridion.

So again, it's about the requirements. Requirements, requirements, requirements! If your organisation is very, very keen on getting this stuff right, then your shiny new implementation isn't going to help them unless they plan for those training sessions for the web team, and then allow the content team to adjust their estimates on every new piece of content to allow for all that extra effort.... It's half past four in the afternoon, and someone in marketing or communications has emailed them a word document with the text of the press release that absolutely must hit the site at midnight tonight. Knocking-off time is half past five, but what the heck, we'll stay late to show commitment, but really, if I'm staying late it had better be for something more profound than faffing around with the third variation of the alt text. The chief editor might just bother to look for a tooltip on the image if they are on the ball, and the rest of the team are already in the pub. Whatever!

So really - if you are in that elite group that wants to do full-on accessibility, and understands and is willing to deal with the implications, good for you. Alvin's "ultimate practice" might be for you, but that'll be OK, because the team will know enough to be able to work with it. If, on the other hand, you want to do the right thing (so presumably you'll go at least as far as some moderately good alt text) the best bet might be to just stick with putting the alt text in the image component, and re-using this text for the title attribute. At least, make sure you have the requirements discussion, because the costs are very clearly not just one-off implementation costs. Even that copy-paste-paste-paste cycle will cost them time and patience every single day. Anyway - my point is that the web content management implementation needs to support the actual practices that people want or expect to follow in their day to day work.

While we're on the subject... when exactly was the last time you sat down with the person responsible for the web site and discussed the need for quote tags, or longdesc attributes? Isn't it about time?

Tridion Practice moves to GIT

Posted by Dominic Cronin at Mar 03, 2013 01:10 PM |

Back when Tridion Practice began in 2011, most of us were happy enough to use Subversion as the underlying version management system. Since that time, distributed versioning systems have shown their usefulness, and if I'm starting a new project these days, I generally begin by setting up a GIT repository. Tridion Practice is hosted at Google Code, and since those early days, they have added support for GIT, so it was just a matter of time before we ended up moving. To be honest, this was a bit of a dry run for moving the Tridion Power tools on to GIT. The power tools project has far more active committers working on code, so there are even more benefits to be had there. We discussed it in the team, and at first there seemed to be a consensus that it would be good to do this before SDL Tridion release their "2013" product, which will inevitably bring changes to the power tools: most likely a new branch at the least. In practice, getting a team that size all saluting the same flag is a bit more difficult, and sometimes consensus is more valuable than progress.

Git comes with some pretty solid tools for migrating from Subversion, and the Google Code support pages have a guide for migrating one of their projects from Subversion to GIT. Even with that guide, I found I had to read around the subject a bit first before having the confidence that I could follow the instructions well enough. It's funny how much you come to rely on your source repository "just working", which leads to an extra bit of care in doing the migration. But, having taken that care, I can now report that the switch-over went without a hitch.
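
For the record, the heart of such a migration with git-svn looks something like this - a sketch with placeholder names, since the Google Code guide has the authoritative steps:

git svn clone https://yourproject.googlecode.com/svn -s --authors-file=authors.txt yourproject
cd yourproject
git push --all https://code.google.com/p/yourproject/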