Dominic Cronin's weblog

Showing blog entries tagged as: Free Software

Connecting to Microsoft SQL Server Developer from Tridion Content Delivery

I've recently been setting up a development image for SDL Web 8.5, and as it's only for use on my development rig, it's fair game to use Microsoft SQL Server Developer edition. It's not supported by SDL, but it's close enough to make it a reasonable risk for my purposes. I got the databases set up and the content manager installed OK, so I moved on to the content delivery stack. 

First I hacked together a database test script to make sure I had all the logins correct, etc. I've done it this way for years, and you may have seen my blog about it quite a long time ago. Everything seemed fine.
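For what it's worth, the essence of such a test script is no more than this - a minimal PowerShell sketch, in which the database name and credentials are placeholders rather than the real ones:

# quick sanity check: can we log in and get an answer from the database?
$conn = New-Object System.Data.SqlClient.SqlConnection
# server, database and credentials are placeholders - substitute your own
$conn.ConnectionString = "Server=(local);Database=Tridion_Broker;User ID=TridionBrokerUser;Password=secret"
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT DB_NAME()"
$cmd.ExecuteScalar()
$conn.Close()

If that prints the database name, the login and the database itself are fine.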

I'd started with the Discovery service, and I'd configured the cd_storage_conf.xml with the relevant database settings I'd just tested. How hard could it be? Except that it didn't work. I got messages in the logs telling me to check my firewall. Doh! Off I went and opened up the firewall ports for my microservices (which I'd forgotten to do) and also 1433 for MSSQL. Still no joy. 

Somewhere along the way I'd also disabled loopback checking and double-checked a bunch of other things that can cause trouble. No joy. 

I went back to my database test script a few times. It uses a System.Data.SqlClient.SqlConnection to execute a simple command. The connection string specifies '(local)' as the server. I'd had trouble with using '(local)' in the cd_storage_conf.xml in a previous version of Tridion, so I had specified 'localhost' instead, and then when that didn't work, a different name that mapped to the same interface. Still nothing. 

The troubling thing was that the test script worked fine. Why was that, when Tridion's java stack had trouble doing the same thing? I should have cottoned on to this way earlier, but eventually I started checking to see if there was actually anything listening on 1433. No there wasn't. Well that helped. And then I started poking around in the network configuration of SQL Server. Sure enough: TCP/IP wasn't enabled. I'm still not sure if this is a Developer edition thing. I seem to recall having come across it before. I'm not the only one. Now that I know the answer, finding a suitable Stack Overflow answer is easy! Maybe I'd had trouble with SQLEXPRESS. 
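If you want to save yourself the hours I lost, check for a listener before anything else (standard Windows tooling, nothing Tridion-specific):

netstat -an | findstr 1433

If nothing shows up as LISTENING on port 1433, enable TCP/IP in SQL Server Configuration Manager (under SQL Server Network Configuration > Protocols) and restart the SQL Server service.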

Anyway, at least that explained why my test script worked OK. The SqlConnection client sees '(local)' and is then able to attempt a named pipes or shared memory connection as well as TCP/IP. The java client, on the other hand, doesn't have this repertoire of options, and if TCP/IP fails, it's over.

Anyway - now it's fixed. Just time for a quick Note To Self, and on with the rest of my system. 

Mashing your scanned JPGs back into one big PDF

It happens more often these days. You get some form sent to you as a PDF. You print it out, and fill it in, and then you want to scan it back in and send it back. For one reason or another, my scanner likes to scan documents to JPEG files: one file per scan. Grr... 

In the past, I've used some PDF printer driver or other to solve this problem, but under the hood they pretty much all use ghostscript, so why not do it directly? I used to install cygwin on my Windows machines to get access to utilities like this, but these days, Windows embeds a pretty much fully functional Ubuntu.
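If you're following along in the embedded Ubuntu, getting the tools in place is a one-liner (package names as found in the standard Ubuntu repositories):

$ sudo apt-get update && sudo apt-get install -y ghostscript imagemagick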

So yeah - just directly using ghostscript. How hard can it be? Well it turns out that a bit of Googling leads you to typing some pretty gnarly command lines, especially since I had scanned a 15 page document into 15 separate JPG files. And then Adobe Acrobat didn't understand the resulting document. No good at all. So then I Googled further and found this.

It turns out that by installing not only ghostscript but also imagemagick, the imagemagick "convert" utility knows how to do exactly what you want, presumably by enlisting the help of ghostscript. So simply by cd'ing to the directory where I had my scans, this...

$ convert *.JPG outputfile.pdf

... did the trick. Pretty neat, huh? Note to self.... 

System refresh: new architecture for www.dominic.cronin.nl

It's taken a while, and the odd skinned knuckle and a bit of cursing, but I can finally announce that this site is running on...erm.. the other server. Tada! Ta-ta-ta-diddly.... daaahhhh!!!!

Um yeah - I get it. It's not so exciting, is it really? The blog's still here, and it's got more or less the same content. It doesn't look any different. Maybe it's a tiny smidgen faster, but even that's more likely to do with the fact that we switched over to an ISP that actually makes use of the glass that runs into our meter cupboard.

But I'm excited. Just a bit, anyway. Partly because it's taken me months. It needn't have, but it's the usual question of squeezing it into the cracks between all the other things that need to get done in life. That and the fact that I'm an utter cheapskate and I don't want to pay for anything. There's also plenty not to be excited about. As I said, the functionality is exactly as it was. The benefits I get from it are mostly about the ability to do things better going forward. 

So what have I done? Well it all started an incredibly long time ago when I started tinkering with docker. I figured that the whole containerisation technology thing had such a lot of potential that I ought at least to run docker on my own server. After all, over the years, I'd always struggled with Plone needing a different version of Python than the one available in the current Gentoo ebuilds. I'd attempted a couple of things, including, I think, an early version of what became LXC, but then along came virtualenv, which made the whole thing moot.

Yeah, well - until I wanted to play with docker for itself. At this point, I just thought I'd install it on my server and get going, but I immediately discovered that the old box I was running was 32-bit, and docker is just far too hip to run on anything so old-fashioned. So I needed a new server, and once I'd realised that, that's when the whole thing started. If I was going to have a new server, why didn't I just containerise everything? It's at this point that someone inevitably chips in with a suggestion that if I weren't such a dinosaur, I'd run it on the cloud, wouldn't I? Well yes - sure! But I told you - I'm a cheapskate, and apart from that, I don't want anyone's soul-less reliability messing with my carefully constructed one-nine availability commitment.

Actually I like cloud tech, but frankly, when you look at the micro-budget that supports this site, I'd have spent all my time searching out a super-cheap host, and even then I'd have begrudged it. So my compromise with myself was that I'd build it all very cloudy, and then the world's various public clouds would be my disaster recovery plan. And so it is. If this server dies, I can get it all up in the cloud with a fairly meagre effort. Still not going to two-nines though.

So I went down to my local high street where there's a shop run by these Indian guys. They always have a good choice of "hardly used" ex-business computers. I think I shelled out a couple of hundred Euros, and then I had something with an i5, enough memory, and a couple of stupidly big disks to make a RAID. Anyway - more than enough for a web server - which is just as well, because pretty soon it ends up just being "the server", and it'll get used for all sorts of other things. All the more reason to containerise everything.

I got the thing home, and instead of doing what I've done many times before and installing Gentoo linux, I poked around a bit on the Internet and found CoreOS. Gentoo is a masochist's delight. I mean - it runs like a sports car, but you have to own a set of spanners. CoreOS, on the other hand, is more or less maintenance free. It's built on Gentoo's build system, so it inherits the sports car mentality of only installing things you are going to use, but the CoreOS folks have done that part for you, and their idea of "things you are going to use" is basically everything it takes to get containers up and keep them running, plus exactly nothing else. For the rest, it's designed for cloud use, so you can install it from bare metal to fully working just by writing a configuration file, and it knows how to update itself while running. (It has a separate partition for the new version, and it just switches over.)

So with CoreOS up and running, the next thing was to convert all the moving parts over to Docker containers. As it stands now, I didn't want to change too much of the basics, so I'm running Plone on a Gentoo container. That's way too much masochism though. I'd already been thinking I'd do a fresh one with a more generic out-of-the-box OS, and I've just realised I can pull a pre-built Plone image based on Debian (or Alpine). This gets better and better. And I can run it all up side-by-side in separate containers until I'm ready to flip the switch. Just great! Hmm... maybe my grand master plan was just to get to Plone 5! 

The Gentoo container I'm using is based on one created by the Gentoo community, which you can pull from the Docker hub. Once I found this, I thought I was home and dry, but it's not really well-suited to just pulling automatically from a Dockerfile. What they've done is to separate out the portage tree into a separate container. This is smart, because you are unlikely to want the whole of portage in your container for any given purpose that makes you want to run Gentoo. What you do instead is mount the portage data using docker's --volumes-from argument. With it mounted, you can run emerge and install whatever packages you need, and then at runtime you get to run a much slimmer system. Which is great, but it means you have to create and store your own image manually rather than using a Dockerfile. (At least, that's how it ended up for a noob like me, once I realised that a Dockerfile doesn't have an equivalent of --volumes-from.)
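For anyone following the same path, the dance goes roughly like this - a sketch, using the image names I found on the hub at the time (and an invented name for my own image):

# the portage container only needs to exist; it's never actually run
$ docker create --name portage gentoo/portage

# mount the portage tree and install whatever you need
$ docker run -it --volumes-from portage --name plone-build gentoo/stage3-amd64 /bin/bash

# ...emerge your packages inside the container, exit, and then
# commit the result as your own image
$ docker commit plone-build dominic/gentoo-plone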

My goal was to set up CoreOS to automatically pull the docker images it needed, and run some setup commands. This meant that I'd need to have my personalised Gentoo image available somewhere. Some of the data in there was sensitive, so I went looking for a private Docker registry that I could upload it to. There are plenty of private registries, but most of them aren't free. (If you don't mind the whole world pulling your containers, then free registries abound.) I eventually found https://canister.io/, which suited my needs. That said, my needs aren't much. If I ever need an alternative to canister, I'll probably look at Google Cloud Platform, which isn't free but has a private container registry where you only pay for storage and data egress, at pretty reasonable rates. Or I could just host it myself, but that's maybe too many eggs in the same basket.
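Getting the image up to canister (or any other registry) is the usual tag-and-push routine - sketched here with an invented repository name:

$ docker login canister.io
$ docker tag dominic/gentoo-plone canister.io/dominic/gentoo-plone
$ docker push canister.io/dominic/gentoo-plone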

Meanwhile, my very next step most probably ought to be getting backups sorted out. The "Dockerish" way to do this is to run up yet another dedicated container to deal with just this concern. Then if I want to host it separately, and my backup approach changes, nothing else needs to. Once I have the backups sorted out, it will definitely be worthwhile to tidy things up so that I really can just push to the cloud if needs be. The way it's set up now, I could be up and running again very quickly, but we're probably talking hours rather than seconds.
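The classic way to do that with containers is the --volumes-from backup pattern - a sketch, with the container name and data path invented for illustration, since they depend on the image in question:

# archive the data volumes of the "plone" container to the current directory
$ docker run --rm --volumes-from plone -v $(pwd):/backup busybox \
    tar czf /backup/plone-data.tar.gz /data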

I'm really enjoying the flexibility that containerisation gives me, although it's definitely important to get into the right mindset. Being able to build containers that will run on a really generic platform is quite liberating.

Spoofing a MAC address in gentoo linux

I spent a few hours this weekend fiddling with networking things at home. One of the things I ran into was that the DHCP server provided by my ISP was behaving erratically. Specifically, it was being very fussy about giving out a new lease. It would give out a lease to a Windows 7 system I was using for testing, but not to my Gentoo server. At some point, having spent the day with this kind of frustration, I was ready to put up with almost any hack to get things running. Someone on the #gentoo IRC channel suggested that spoofing the MAC address that already had a lease might be a solution. Their solution was to do this: 

ifconfig eth0 down
ifconfig eth0 hw ether 08:07:99:66:12:01
ifconfig eth0 up

Here, you have to imagine that eth0 is the name of the interface, although on my system it isn't any more. (Another thing I learned this weekend was about predictable interface names.) You should also imagine that 08:07:99:66:12:01 is the MAC address of the network interface on my Win7 system.
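By the way, if your system is modern enough to have deprecated ifconfig, the ip tool will do the same job:

ip link set dev eth0 down
ip link set dev eth0 address 08:07:99:66:12:01
ip link set dev eth0 up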

The trouble with this is that it doesn't integrate very well with the standard init scripts that get things going on a Gentoo system. Network interfaces are started by running /etc/init.d/net.eth0 (although that's just a link to another script). The configuration is to be found in /etc/conf.d/net, where you can add directives that control the way your network interfaces are configured. The most important of these are the ones that begin with "config_". For example, to set up a static IP for eth0, you might say something like:

config_eth0="192.168.0.99 netmask 255.255.255.0 brd 192.168.0.255"

or for DHCP it's much simpler: 

config_eth0="dhcp"

So my obvious first try for setting up a spoofed MAC address was something like this:

config_eth0="dhcp hw ether 08:07:99:66:12:01"

but this didn't work at all. Anyway - after a bit of fiddling and more Googling (sorry - I can't remember where I found this), it turned out that there's a specific directive just for this purpose. I tried this:

mac_eth0="08:07:99:66:12:01"
config_eth0="dhcp"

It works a treat. Note that the order is important, which is obvious once you know it, I suppose, but wasn't obvious to me until I'd got it wrong once.

The good news after that was that for an established lease, everything worked rather better.

Managing the Tridion Core service powershell module as a git submodule

Posted by Dominic Cronin at Sep 06, 2015 01:50 PM

N.B. Peter has changed the structure of the module (as he has every right to do, and I'm not complaining) - what this means is that this blog post is pretty useless other than as an exercise in poking at things. Maybe I'll figure it out, but in the meantime, assume that this technique won't work. 

I spend quite some time fiddling with various powershell implementations on my Tridion image. Whenever there's a place where I do experimental things like this, I run the risk that I'm going to break something, so, at the very least, I usually do a quick "git init" in the directory, add the files and commit them. Then I have the benefit of version diffs and rollbacks if I need them. The next step comes when I realise that it's something I'm going to work on over a longer time, and that I really would prefer not to lose. At this point, I usually go on to my linux server and init a bare git repository, and then push from wherever I'm working.

Today I reached this second phase with the WindowsPowerShell directory of the Administrator account on my Tridion image. (It's about time, because I'm busy preparing a talk for the Tridion Developer Summit in a couple of weeks, and, well, losing my scripts would put a kink in my plans, to say the least.)

In any case, I'd realised that I was running quite an old version of Peter Kjaer's Tridion-CoreService module. This module is the basis of pretty much any effort to use the Tridion core service from the powershell, and as this is the subject of my upcoming talk, I figured I should at least be doing my demos on the current version.

If you go to the github page for the module, you'll see that Peter's provided installation scripts which will help you to get up and running, but of course, if you have git installed, it makes just as much sense to clone the module directly. The only problem I had was that modules are normally located in the Modules directory under the WindowsPowerShell directory. (You can add other locations to $env:PSModulePath, but for what I wanted, that wasn't ideal.)

Fortunately, git is widely used for projects that make use of other projects, and there is very good support built in, by way of git-submodule. As my main git repository for the powershell stuff is directly in the WindowsPowerShell directory, all I needed to do was add Peter's module as a submodule with the right path.

In fact I just clicked on the menu option in TortoiseGit, but the basic command looks something like this:

git submodule add --name Modules/Tridion-CoreService git@github.com:pkjaer/tridion-powershell-modules.git Modules\Tridion-CoreService

With this in place, git understands that the Tridion-CoreService code belongs to Peter's module, and if he releases a new version, I can just pull. And of course, my own changes go in my own repository. Adding a submodule adds a .gitmodules file in your repository, so if I ever clone my WindowsPowerShell repository into another server, the location of Peter's repository can be retrieved, and the files pulled from there.
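One detail worth remembering: on a fresh clone, the submodule's files aren't fetched automatically. You either clone recursively to begin with, or initialise the submodule afterwards (the repository URL being whatever yours happens to be):

git clone --recursive <url-of-my-WindowsPowerShell-repo>

# or, in a clone you've already made:
git submodule update --init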

One word of warning. This is not the official release process for the Tridion-CoreService module. That is described here. As the module is pretty much a one-man affair, it's not unreasonable that there's only the master branch, so pulling from it is at your own risk. Personally I'm happy with the small risk, as it helps me to keep my development system a bit tidier - and heck - if it breaks, we'll fix it!


Tridion bookmarklet challenge: the entries

Back in July, I issued the Tridion Bookmarklet challenge, and shortly afterwards set up a question on meta.tridion.stackexchange.com to manage the entries and the voting. The closing date for entries was 31 Dec 2014, so we've now had all the entries: a grand total of 13. Over the next month, we'll be collecting votes to see which entry, in the eyes of the community, was the best.

For the first few months of the challenge, there was very little activity. I even got a bit nervous that we'd have hardly any entries. I needn't have worried; it hotted up nicely towards the end.

So what did we get for entries? Quite a mixed bag actually - and somewhere in there, quite a few interesting technical insights.

The first entry wasn't a bookmarklet at all. Roel Van Roozendaal chose to create a Chrome browser extension instead. This was a re-implementation of the delete message centre messages functionality that had kicked off the whole bookmarklet discussion in the first place, so nothing new from a functional point of view, but his article shows clearly how to wrap up the functionality in an extension. Certainly there's enough there to inspire you if you were thinking of doing something similar.

We also had an entry that was a bookmarklet, but didn't extend the GUI. Well it was already clear that the rules were pretty relaxed - we're more interested in useful, interesting, inspiring... even if I've been known for pedantry in other contexts :-) So go and check out the entry by Jan Horsman (Jan H), which makes it simple to log in to the Tridion documentation site.

We had multiple entries from several people. Robert Curlette (robrtc) entered Count Items in View, and later Get Schema Id and Title. The first of these really showed the community process in action, with Robert reaching out with a question on Tridion Stack Exchange (TREX). In his second entry, Robert also bravely 'fessed up to his ignorance on TREX and was rewarded. (See how that works, people? He now knows more than he did, and he's helped the rest of us. We need askers as well as answerers - and fortunately Robert does well in both categories.)

Chris Morgan came up with a couple: Get WebDAVUrl and Open Schema. Both of these are going to be useful to Tridion developers doing their daily work.

Frank Taylor (paceaux) also made two entries. He started with PubUp, which simply lets you navigate to where an item is in the BluePrint. Once he'd entered PubUp, the inevitable community interaction took hold, and he was diving into the Anguilla framework with the help of Nuno Linhares. Lo and behold, out of this process, he was inspired also to enter his AnguillaMediator bookmarklet.

Pankaj Gaur created a bookmarklet that would display the file information of a multimedia component. That's not displayed by default in the Tridion GUI, so there will definitely be people who want this. He also did a bookmarklet that would localize an item.

Not everyone did two :-) Rob Stevenson-Leggett did one to rename a Tridion item (which was always an awkward thing to do), and Jonathon Williams created Get Creation Info. Don't worry, guys - one entry is quite enough; these are both useful offerings, and even with two entries, only one can win.

Alexander Orlov (UI Beardcore) also made a single entry, with his Multiple Upload bookmarklet. Alexander is also notable as the person credited with the most "assists" in the challenge. Several people have made use of what we're coming to know as the Beardcore hack to acquire a reference to the Anguilla API in the Tridion GUI.

So thanks to you all for taking the time and trouble to create these bookmarklets and take part in the challenge. It's been great to see the spirit of friendly rivalry, with people learning from each other and even helping to improve competing efforts. I don't know who is going to win, but every single one of you has helped the community by spreading your knowledge and expertise.

Vim Windows weirdnesses

Posted by Dominic Cronin at Dec 22, 2014 09:43 PM

This is just a quick note-to-self to remind me of the stuff I always forget when installing plugins and the like for Vim on a Windows machine. So of course this means gVim. The confusing thing is always that the documentation for everything refers to your ~/.vim directory. And - you haven't got one. Here's the note to self.

Your ~/.vim directory is called vimfiles

And ~ is probably somewhere like C:\Users\dominic - your .vimrc will be there too, so you can find it by running vim and doing:

:echo $MYVIMRC
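And if you want to convince yourself that Vim really is looking in vimfiles, just ask it:

:set runtimepath?

Somewhere in the output you should see something like C:\Users\dominic\vimfiles.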

The Tridion bookmarklet challenge: an update

A couple of weeks ago, I issued the Tridion bookmarklet challenge. OK - that sounds pretty grand, but really it's not. It's only that having come across the idea of using bookmarklets to enhance the Tridion GUI, this seemed like a great chance to see what inventiveness people could come up with - so why not a challenge? The day after I issued the challenge, my web server went down, so the initial burst of publicity was kind of wasted. Anyway - I hadn't really thought through the details then, so now's my chance to flesh it out a bit, and attempt another deluge of publicity.

So how is it going to work? Here's how:

What you need to do

  • Create a useful and well-constructed bookmarklet which enhances the Tridion GUI in some way. It should work with SDL Tridion 2013, but you may wish to consider making it work for 2011 as well.
  • Publish it on-line. You can put it on your own web site, or host it somewhere else. (SDL Tridion world, Google code... whatever - I don't care, as long as it's available via the Internet.)
  • Publicise it. You should tweet a link to your entry using #tridionlet. You also need to link to it in an answer to this question on meta.tridion.stackexchange.com. Use whatever other means are at your disposal to publicise your entry.
  • Apportion the credit correctly. You can enter as a team if you like - so if one person is responsible for the functional aspects, and another for hacking out gnarly javascript, you should say so. Just as long as it's clear who should get the kudos.
  • Get this all done by the end of 31 December 2014.

What happens once you've done this?

The judging will be done by the community. This is the reason why you need to answer the question on meta.

We'll wait until the end of January, which should give people a chance to finish their New Year celebrations, and actually read the code... maybe try the bookmarklets out in real life. There'll be a burst of publicity during January to make sure people think about voting, and then whoever has the most votes by the end of 31 January 2015 will be the winner.

Is it better to wait until January to vote?

Yes - the entries might be improved right up to the deadline. Who knows? Also - some people may prefer not to put their entry on-line until quite late on, or may feel pressured by seeing other entries getting more votes (or fewer). Entrants: remember that many people will wait until January to vote, so don't read too much into it until then. Voters: I'm putting you on your honour to vote for the best entries. So please don't just vote for your mates, and please don't vote to show some kind of company loyalty. This is personal, not corporate.


One-nine availability

Posted by Dominic Cronin at Aug 16, 2014 09:43 AM

A couple of weeks ago, this site went down. That happens from time to time. It went down just as I left the country to go on holiday, and it could only be fixed via physical access, so it was down for a week. At least one person has commented that maybe I should stop with this silliness of running my own server on an old Gentoo box in the meter cupboard, and get some proper hosting.

The thing is, that when I started this blog, some years ago now, I went through a detailed requirements analysis, and a full MoSCoWMeh matrix. If you aren't familiar with MoSCoWMeh, this is an enhanced variant of the well-known MoSCoW technique, which also accommodates the needs of private and hobby-run systems.

The requirement for reliable hosting and 5-nines up-time was classified as Meh, and has remained so since. So now you know.

Editing Tridion templates with a little Wasavi sauce

Posted by Dominic Cronin at Jun 23, 2014 11:05 PM

This post is dedicated to all the brave Tridion hackers that struggled through the VBScript years, editing endless templates, most likely in IE6 or worse. How many of us, back in the day, wished for better editing support in the browser? These days, of course, if you have any sense, most of your complex code will be safely tucked away in Visual Studio, but still... who doesn't still occasionally have to carve up a Razor building block, or even a DWT? Sure, the browsers have all become awesome, but when all is said and done, you're still editing that template in a textarea. Cheer up - that's all about to change!

I first came across Wasavi a couple of months ago - I can't even remember what the context was, but I mentally classified it as an interesting curiosity and moved on. It was still installed in my browser, but I forgot about it until today, when it suddenly came in useful. Oh hang on a sec... what's Wasavi? It's a browser extension that turns each textarea into a vi editor. Cute eh?

OK - I can see what you're thinking... oh yeah - vi. Bloody geeks. And fair enough - I cut my teeth on vi in 1989, on a nasty old mainframe running Multics. I still like it as an editor, but that probably only indicates a certain level of perversity. So why am I pestering you, my fellow Tridion people, with it? Well - of course, some of you will be as comfortable as I am in grungy old-skool editors. For the rest of you, it may still be useful enough to have it installed in your browser (Chrome, Firefox or Opera... so the first two, eh?)

I offer you.... tada.... Global Search and replace!! As you can see in the screenshot, something awful has happened to my template, and all the instances of "ComponentField" have unfortunately been transformed to "Banana". How to fix this?

<Ctrl>-<Enter>
:%s/Banana/ComponentField/g<Enter> 
:wq

Ctrl-Enter opens up the wasavi view. The line beginning with :%s means "on every line, substitute all instances of Banana with ComponentField", and :wq is vi-speak for write-quit, which is the same as save and close, which returns you to the normal textarea view.

This is one example, albeit a fairly handy one, but vi is capable of much more. The next time you're looking at some awkward editing task - think of wasavi, and just Google for the relevant vi commands.

Edit: Just thought of another cool way to use this.... vi has parenthesis-matching, so if you have to unscramble a badly formatted piece of code, just park on a paren, or any other kind of bracket, and hit % to go to its counterpart. (Full-on vim has good auto-formatting features too, but I'm not sure if the browser implementation does much more than the basics... still - useful.)