
Dominic Cronin's weblog

Showing blog entries tagged as: note to self

Vim Windows weirdnesses

Posted by Dominic Cronin at Dec 22, 2014 09:43 PM

This is just a quick note-to-self to remind me of the stuff I always forget when installing plugins and the like for Vim on a Windows machine. So of course this means gVim. The confusing thing is always that the documentation for everything refers to your ~/.vim directory. And - you haven't got one. Here's the note to self.

Your ~/.vim directory is called vimfiles

And ~ is probably somewhere like C:\Users\dominic - your .vimrc will be there too, so you can find it by running vim and doing

:echo $MYVIMRC
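
While you're in there, a couple of other checks can be handy (the output will obviously differ per machine): $HOME is where vimfiles and your vimrc live, and vimfiles should show up at the front of the 'runtimepath' list, which is where plugins actually get picked up from.

:echo $HOME
:set runtimepath?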

What not to do when upgrading to Grub2

Posted by Dominic Cronin at Feb 08, 2014 04:10 PM

I'd been following the Gentoo Wiki guidance on upgrading Grub, and had been taking it very carefully. I'd worried about getting this right, as getting it wrong would leave me with a brick, so I'd been very pleased to see the notes on using the old bootloader to chain load the new one. That way I could check that my configuration was correct before taking the plunge of installing the new version into the Master Boot Record. I didn't want to automatically generate the new config file, as I didn't trust it. (Rightly so, as it turned out, because my initrd files didn't follow the strict naming requirements, so they weren't picked up by the config generation script.) Anyway - the hand-written config was half a dozen lines long, and the generated one was utterly incomprehensible.

So anyway - I managed to create the config file, and get everything set up for chain loading. I rebooted the server, and bingo - there was the chain loader entry in my "old" boot screen, and when I followed it, I got the new menu and could boot the server. Great stuff! Now it should have been a simple question of running grub2-install, and I'd be finished. So I did this, and then.... the computer wouldn't start. Fortunately I had a grub prompt, so grub was "working" - but it obviously couldn't find its config file. I already knew that with the right incantations it might be possible to get the thing to boot without a config file, and after a bit of googling, I got enough clues to attempt it. (For the record, what I think I'd done wrong was to fail to remount /boot after my chain test and before running grub2-install, with the result that grub then didn't know how to correctly find /boot.)

It took a few attempts, but the command line completion in grub helps a lot. This is what I eventually ended up typing at the grub prompt to get a working boot.

grub > set root=(hd0,1)
grub > linux /kernel-gen-newudev-3.3.8-gentoo root=/dev/sda3
grub > initrd /initramfs-gen-newudev-3.3.8-gentoo
grub > boot

Note that the root for the boot loader is different from the root of the operating system, so you have to specify them separately. Obviously YMMV for the names of the kernel and initrd files, not to mention device identifiers.

But the real advice here is to avoid missing out that crucial mount operation!!
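
For what it's worth, the right order of operations boils down to something like this - a sketch rather than a recipe, and the device and file paths are from my setup, so adjust to taste:

mount /boot                              # the step I missed: make sure the real /boot is mounted, not an empty mount point
cp ~/grub.cfg /boot/grub/grub.cfg        # your hand-written config (or grub2-mkconfig -o /boot/grub/grub.cfg if you trust it)
grub2-install /dev/sda                   # only now write the new loader to the MBR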

Gentoo emerge dies with 'failed to open /dev/urandom' when the wrong default python is configured

Posted by Dominic Cronin at Jan 22, 2014 12:12 AM

So there I was - just for fun building my new Gentoo system, when all of a sudden, I wasn't. Building, that is. I wasn't building anything. In fact, part of the motivation for a clean build had been that emerging new things was getting tiresomely fragile. Anyway - here's what happened when I tried an emerge. The interesting part is where it says: Fatal Python error: Failed to open /dev/urandom

>>> Emerging (1 of 18) sys-libs/glibc-2.17
 * Fetching files in the background. To view fetch progress, run
 * `tail -f /var/log/emerge-fetch.log` in another terminal.
 * glibc-2.17.tar.xz SHA256 SHA512 WHIRLPOOL size ;-) ...                                                                            [ ok ]
 * glibc-2.17-patches-8.tar.bz2 SHA256 SHA512 WHIRLPOOL size ;-) ...                                                                 [ ok ]
make -j2 -s glibc-test
make -j2 -s glibc-test
>>> Unpacking source...
 * Checking gcc for __thread support ...                                                                                             [ ok ]
 * Checking kernel version (3.3.8 >= 2.6.16) ...                                                                                     [ ok ]
 * Checking linux-headers version (3.9.0 >= 2.6.16) ...                                                                              [ ok ]
>>> Unpacking glibc-2.17.tar.xz to /var/tmp/portage/sys-libs/glibc-2.17/work
>>> Unpacking glibc-2.17-patches-8.tar.bz2 to /var/tmp/portage/sys-libs/glibc-2.17/work
 * Applying Gentoo Glibc Patchset 2.17-8 ...
 *   0035_all_glibc-2.16-i386-math-feraiseexcept-overhead.patch ...                                                                  [ ok ]
 *   0059_all_glibc-2.19-make-4.0.patch ...                                                                                          [ ok ]
 *   0065_all_glibc-2.18-qecvt-guards.patch ...                                                                                      [ ok ]
 *   0070_all_glibc-2.18-localedef-page-align-1.patch ...                                                                            [ ok ]
 *   0071_all_glibc-2.18-localedef-page-align-2.patch ...                                                                            [ ok ]
 *   0072_all_glibc-2.18-localedef-page-align-3.patch ...                                                                            [ ok ]
 *   0085_all_glibc-disable-ldconfig.patch ...                                                                                       [ ok ]
 *   0090_all_glibc-2.17-arm-ldso.cache.patch ...                                                                                    [ ok ]
 *   1005_all_glibc-sigaction.patch ...                                                                                              [ ok ]
 *   1008_all_glibc-2.16-fortify.patch ...                                                                                           [ ok ]
 *   1040_all_2.3.3-localedef-fix-trampoline.patch ...                                                                               [ ok ]
 *   1055_all_glibc-resolv-dynamic.patch ...                                                                                         [ ok ]
 *   1505_all_glibc-nptl-stack-grows-up.patch ...                                                                                    [ ok ]
 *   1506_all_glibc-2.17-hppa-fpu.patch ...                                                                                          [ ok ]
 *   1507_all_glibc-2.17-hppa-ldso-flag.patch ...                                                                                    [ ok ]
 *   1507_all_hppa-ia64-DL_AUTO_FUNCTION_ADDRESS.patch ...                                                                           [ ok ]
 *   1508_all_glibc-2.17-hppa-futex.patch ...                                                                                        [ ok ]
 *   1508_all_hppa-fanotify_mark.patch ...                                                                                           [ ok ]
 *   3020_all_glibc-tests-sandbox-libdl-paths.patch ...                                                                              [ ok ]
 *   5063_all_glibc-dont-build-timezone.patch ...                                                                                    [ ok ]
 *   6024_all_alpha-fix-signal-thunk-unwind-info.patch ...                                                                           [ ok ]
 *   6230_all_arm-glibc-hardened.patch ...                                                                                           [ ok ]
 * Done with patching
 * Using GNU config files from /usr/share/gnuconfig
 *   Updating scripts/config.sub                                                                                                     [ ok ]
 *   Updating scripts/config.guess                                                                                                   [ ok ]
>>> Source unpacked in /var/tmp/portage/sys-libs/glibc-2.17/work
Fatal Python error: Failed to open /dev/urandom
/usr/lib/portage/bin/phase-functions.sh: line 87:  4204 Aborted                 "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}"
 * ERROR: sys-libs/glibc-2.17::gentoo failed (unpack phase):
 *   filter-bash-environment.py failed
 *
 * Call stack:
 *            ebuild.sh, line 714:  Called __ebuild_main 'unpack'
 *   phase-functions.sh, line 993:  Called __filter_readonly_variables '--filter-features'
 *   phase-functions.sh, line 137:  Called die
 * The specific snippet of code:
 *      "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}" || die "filter-bash-enviroment.py failed"

So what was going on here? Well, as it turned out, my system has three versions of python installed, and Gentoo's portage system (of which emerge is part) seems to rely on you not using python 3 as the default. After a short bit of fiddling with "eselect python list" and "eselect python set" to get the default python back to 2.7, the build ran like a charm.
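
For the record, the fiddling amounts to something like this - the slot number is whatever eselect reports on your own box, so don't copy it blindly:

eselect python list    # shows the installed interpreters
eselect python set 1   # make the python 2.7 slot (whatever number it has in the list) the default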

So anyway - this has got to count as the most bizarrely mis-reported error I've seen in recent years. "/dev/urandom" was working fine. I could start and stop the service ("/etc/init.d/urandom stop" and so forth) and I could use the device to access randomness. Why, then, did I get the "failed to open" message with one version of python and not with another? Answers on a postcard? Whatever - this was a public service announcement.

Getting IIS Express to run in a 64 bit process, and other fun Tridion content delivery configurations

Posted by Dominic Cronin at Jul 24, 2013 07:55 PM

In the last couple of days, I've spent far more time than I'd like figuring out how to get a Tridion-based web application to run correctly under Visual Studio. There are three basic choices:

  1. Run it directly using Visual Studio
  2. Run it using IIS Express
  3. Run it using IIS (non-Express version)

As the application is intended to run on a 64 bit architecture, there are some challenges. Visual Studio runs in 32 bit mode, so the first option is out. Using full-on IIS is an attractive thought; you can manually configure the application pool to run in 64 bit mode. Unfortunately, getting a debug session up and running takes more configuration than that. You have to set up the web site correctly, and it was just too fiddly. I ran out of time, or steam or whatever. (Somebody will probably tell me it's easy, and I dare say it is when you know how, and aren't spending time you really should be spending on something else. Any hints are always welcome.)

Of course, with a Tridion site, half the game is making sure you have the correct DLLs in place for the processor architecture you are using. Along the way, I discovered that the quick and dirty way to tell if you have a 32 or 64 bit version of xmogrt.dll (Juggernet's "native" layer) is the size. The 64 bit version comes in at 1600KB and the 32 bit version is about half that at 800KB or so. This varies from version to version, so on a 2013 system, it's 1200ish/900ish KB, but once you get the hang of it, you can tell them apart at sight, which is pretty useful.  The other DLLs are also important, although as far as I can tell, only Tridion.ContentDelivery.AmbientData.dll is hard-compiled for 64 bit architecture, at least on the 2011 system I was working on. The rest of the .NET assemblies are compiled to MSIL, which of course, will run on either architecture.

But I digress. The thing I wanted to blog (and this will definitely be tagged note-to-self) was how to get IIS Express to run in 64 bit mode. By default it runs on 32 bits, but if you follow this link:

http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/3254745-allow-for-iis-express-64-bit-to-run-from-visual-st

... you will find the following nugget of goodness:

You can configure Visual Studio 2012 to use IIS Express 64-bit by setting the following registry key:

reg add HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\WebProjects /v Use64BitIISExpress /t REG_DWORD /d 1

However, this feature is not supported and has not been fully tested by Microsoft. Improved support for IIS Express 64-bit is under consideration for the next release of Visual Studio.

Very handy indeed. Running under IIS Express is just one click of the button, and it just works.
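
If you want to check that the setting took, reading it back from PowerShell is enough (same key path as above):

Get-ItemProperty 'HKCU:\Software\Microsoft\VisualStudio\11.0\WebProjects' -Name Use64BitIISExpress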

And by way of a PS (Post Script, that is, not PowerShell), here's how you find the processor architecture of a DLL (this time on my 2013 image):

PS C:\inetpub\www.visitorsweb.local\bin> [reflection.assemblyname]::GetAssemblyName((resolve-path '.\Tridion.ContentDelivery.AmbientData.dll')).ProcessorArchitecture
MSIL
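
That trick only works for .NET assemblies, of course. For a native DLL like xmogrt.dll, where I was going by file size above, you can read the machine type straight out of the PE header instead - a quick sketch, with the path being purely illustrative:

$path = '.\xmogrt.dll'                        # point this at the DLL you're curious about
$fs = [System.IO.File]::OpenRead((Resolve-Path $path))
try {
    $br = New-Object System.IO.BinaryReader($fs)
    $null = $fs.Seek(0x3C, 'Begin')           # the DOS header stores the offset of the PE header here
    $peOffset = $br.ReadInt32()
    $null = $fs.Seek($peOffset + 4, 'Begin')  # the machine field follows the 4-byte PE signature
    switch ($br.ReadUInt16()) {
        0x014c  { '32 bit (x86)' }
        0x8664  { '64 bit (x64)' }
        default { 'something else entirely' }
    }
}
finally {
    $fs.Dispose()
}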

Well anyway - it's no fun scratching your head over stuff like this. Maybe this helps.

Pacman

Posted by Dominic Cronin at Apr 17, 2013 09:55 PM

This is mostly by way of a "note to self". I've recently started working at a customer where connecting my computer to their network is not just allowed, but necessary. Once connected, if I want to use the Internet, I have to go through their filtering proxy - presumably to keep the badness of the Internet from their systems (and yes, they do pay a lot of attention to ensuring the machine is virus-free). Previously, when I worked there for a day or two, setting up the proxy was a minor irritation, but as I'm going to be there rather longer, the idea of reconfiguring my networking twice a day started to look pretty unattractive. My first attempt at solving this had been to have a couple of scripts that set up the proxy by making the relevant registry settings, but unfortunately, Windows doesn't pick these up immediately. Yeah - sure - if I could remember to run the scripts before shutting down it might work, but I'm not that obsessive. Or I could get Windows to pick up the settings by opening the various screens... Internet Options... Connections... LAN Settings... oh wait... there had to be a better way.

It turns out that there's something called a Proxy Auto-Configuration (PAC) file. If you select "Automatically detect settings", then Windows will try to locate one of these on the network using the Web Proxy Auto-Discovery Protocol; however, the customer in question doesn't do this. My needs were simple enough, though, so I checked the next box down: the one that says "Use automatic configuration script". All that remained was to create the script.

It turns out that you write such things in JavaScript: it's simply a matter of writing the one function whose name the PAC standard specifies, using the other functions it makes available. Here's what I ended up with (although I'll probably add refinements):

function FindProxyForURL(url, host) {
	var customerProxy = "PROXY 10.62.40.42:1234";

	if (atCustomer()){
		if(dnsDomainIs(host, ".internal.customer.com") || dnsDomainIs(host, "localhost")|| dnsDomainIs(host,".local")){
			return "DIRECT";
		}
		else return customerProxy;
	} else {
		return "DIRECT";
	}
	
}

function atCustomer(){
	return isResolvable("server.not.on.external.dns");
	// or maybe
	// return isInNet(myIpAddress(), "10.62.0.0", "255.255.0.0"); 
}

Nothing fancy, but it works. I suspect I'll find a few edge cases where I maybe have to enhance the script or even configure things by hand, but for now I have the satisfaction of knowing I can just turn up, plug in, and start work.

Mysterious 404 errors showing up in the Tridion message centre

Posted by Dominic Cronin at Dec 19, 2012 11:37 PM

Today I spent some time setting up a Tridion 2011 Content Manager server. In fact, the content manager had already been installed and had been working fine. Then we'd installed Microsoft Search Server. OK - so it's quite unusual to be doing quite so much all on one server, but this is a customer with minimal needs. Not everyone has 200 servers in the rack! Although Search Server is packaged as a product in its own right, it's built on Sharepoint, and when you install it, it seems to bring two thirds of Sharepoint with it, including two MSSQL instances and three web sites. So to get the benefit of Microsoft's "free" search services, we'll probably have to configure another couple of gigs of RAM. (SFX: Sound of a cash register going "ca-ching" at VMWare headquarters)

Anyway to be fair, the search solution looks pretty good and it definitely does what it says on the box, although it's got about a hundred configuration screens (I haven't actually counted them, though). Well anyway - we'd installed this beast on our previously working Tridion server, and most things were going OK. Until I did an IISRESET, and then suddenly the Tridion CME started to complain about a 404 problem. So when you started the CME, you'd get error messages like:

The remote server returned an error (404) not found

On examining the message centre, I found this message 6 times, along with "Loading list of languages failed" and "Loading list of locales failed". Sure enough, the relevant drop-downs in the User preferences weren't populated.

When I F12'd the browser (is there a verb, "to F12"? There should be), I could see that the browser wasn't seeing any responses with HTTP status 404. So what was going on?

After digging a bit on the server, I found that there were entries in the web server log like this:

2012-12-19 12:59:41 ::1 POST /WebUI/Models/CME/Services/General.svc/GetListCustomPages - 80 BLAH\Administrator ::1 - 404 0 0 58
2012-12-19 12:59:41 ::1 POST /WebUI/Models/CME/Services/General.svc/GetListFavorites - 80 BLAH\Administrator ::1 - 404 0 0 62
2012-12-19 12:59:41 ::1 POST /WebUI/Models/CME/Services/General.svc/GetListSystemAdministration - 80 BLAH\Administrator ::1 - 404 0 0 15
2012-12-19 12:59:41 ::1 POST /WebUI/Models/TCM54/Services/Lists.svc/GetList - 80 BLAH\Administrator ::1 - 404 0 0 30
2012-12-19 12:59:41 ::1 POST /WebUI/Models/TCM54/Services/Lists.svc/GetListEnumerationValues - 80 BLAH\Administrator ::1 - 404 0 0 5
2012-12-19 12:59:41 ::1 POST /WebUI/Models/TCM54/Services/Lists.svc/GetListEnumerationValues - 80 BLAH\Administrator ::1 - 404 0 0 8

So I could see from here that the errors were taking place when the CME web application made a local call-back on the server to its own service layer. A bit more poking around showed that the problem appeared whenever the CME made a callback to a service.

So what was going on? (Did I ask that already?)

It turned out that installing large portions of Sharepoint had had the undesired effect that the Tridion CME web site no longer owned the default binding. We had a host header binding mapped in IIS, and you could reach this just fine, but since the install, traffic aimed at 'localhost' was going to the wrong web site. Actually, Tridion has got this covered, because in the WebRoot Web.Config there's an app setting called "Tridion.WCF.RedirectTo". This was pointing to localhost (which had worked fine when the server was first installed). So when the CME tried to make calls back to the Model services, it was aiming these calls at localhost, which, of course, ended up in the Sharepoint site and a 404.

We fixed the immediate problem by editing the IIS bindings, but we're considering whether it might be good practice to always configure Tridion.WCF.RedirectTo to go to the name of your site, and not to localhost.
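
If you do decide to do that, the setting in the WebRoot Web.Config would end up looking something along these lines - the host name here is invented for the sake of the example, and you should keep whatever value format the existing localhost entry already uses:

<appSettings>
  <add key="Tridion.WCF.RedirectTo" value="http://cme.yourcompany.local"/>
</appSettings>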

The relevant Tridion documentation is here.

Tridion Explorer reports System.ServiceModel.ServiceActivationException

Posted by Dominic Cronin at Dec 15, 2012 08:50 PM

I'd been noticing strange messages popping up in the message centre of the SDL Tridion Explorer. The messages were about some service call failing with a 500 status and System.ServiceModel.ServiceActivationException, and seemed to be coming from various service points under C:\Program Files (x86)\Tridion\web\WebUI\Models\TCM54\Services. Here's an example:

/WebUI/Core/Services/Communicator.svc/Invoke failed to execute. STATUS(500): System.ServiceModel.ServiceActivationException

Not all the time, just occasionally when I did certain things. The thing that got me irritated enough to do something about it was when I wanted to delete a list of old versions of some items, and the multiple items functionality was breaking, and throwing up these messages. I could delete them one item at a time, but not all together. I suspect you can get problems with other things too, looking at the list of services that are served the same way from Models\TCM54\Services, and I think I remember also having problems with publishing and where-used.

A bit of Googling pointed me in the right direction, and after a couple of false starts I ended up editing C:\Program Files (x86)\Tridion\web\WebUI\WebRoot\Web.Config

What you need to do to fix the problem is to add some configuration to get WCF to behave properly. On my now-working system it looks like this, but YMMV.

<serviceHostingEnvironment>
  <baseAddressPrefixFilters>
    <add prefix="http://localhost/"/>
  </baseAddressPrefixFilters>
</serviceHostingEnvironment>

Actually - once you're poking around in the web.config file, it's pretty easy, because it turns out that Tridion have already included the relevant configuration, commented out.

It may be that it's also called out in the installation documentation, and that I've missed it. Anyway - joining up the dots between the symptoms and this particular piece of config isn't so obvious, and it's always possible that you set up your system correctly and then add a new name binding in IIS later. So: this "note to self" post, which will maybe help me to remember the extra step that's needed. And it can't hurt to have the cause and solution in close proximity in a Googleable location. :-)

Using helpers in Tridion Razor templating

Today, for the first time, I used a helper in a Razor Tridion template. I'd made a fairly standard 'generic link' embedded schema, so that I could combine the possibility of a component link and an external link in a link list, and allow for custom link text. (Nothing to see here, move along now please.)  However, when I came to template the output, I wanted to have a function that would process an individual link. A feature of Razor templating is that you can define a @helper, which is a bit like a function, except that instead of a return value, the body is an exemplar of the required output. There is also support for functions, so to lift Alex Klock's own example:

@functions {
    public string HelloWorld(string name) {
        return "Hello " + name;
    }
}

and

@helper HelloWorld(string name) {
    <div>Hello <em>@name</em>!</div>
}

will serve fairly similar purposes.

What I wanted to do today, however, was slightly different: I didn't want to pass in a string, but a reference to my embedded field. All the examples on the web so far are about strings, and getting the types right proved interesting. I started out with some code like this:

@foreach(var link in @Fields.links){
  @RenderLink(link);
}

So I needed a helper called RenderLink (OK - this might be a very trivial use-case, but it was a real problem all the same). But what was the type of the argument? In theory, "links" is an EmbeddedSchemaField (or to give it its full Sunday name: Tridion.ContentManager.ContentManagement.Fields.EmbeddedSchemaField), but what you get in practice is an object of type "Tridion.Extensions.Mediators.Razor.Models.DynamicItemFields". I'd already guessed this by poking around in the Razor Mediator sources, but after a few of my first experiments went astray, I ended up confirming it with @link.GetType().FullName.

Well I tried writing a helper like this:

@using Tridion.Extensions.Mediators.Razor.Models 
@helper RenderLink(DynamicItemFields link){
... implementation
}

but that didn't work, because when you try to call the methods on 'link' they don't exist.

And then, just for fun, of course, I tried

@using Tridion.ContentManager.ContentManagement.Fields 
@helper RenderLink(EmbeddedSchemaField link){
... implementation
}

but that was just going off in an even worse direction. Yeah, sure, that type would have had the methods, but what I actually had hold of was a DynamicItemFields. Eventually, I remembered some hints in the mediator's documentation and tried using the 'dynamic' keyword. This, it turns out, is what you need. The 'dynamic' type lets you invoke methods at run-time without the compiler needing to know about them. (At last, I was starting to understand some of the details of the mediator's implementation!)

@helper RenderLink(dynamic link){
... implementation
}
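
Just to make it concrete, a minimal body might look something like this - 'url' and 'linkText' are made-up field names from an imaginary embedded schema, not anything the mediator gives you for free:

@helper RenderLink(dynamic link) {
    <a href="@link.url">@link.linkText</a>
}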

This may be obvious with hindsight (as the old engineers' joke has it... for some value of 'obvious'). For now, I'm writing another blog post tagged #babysteps and #notetoself, and enjoying my tendency to take the road less travelled.

Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Then took the other, as just as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Though as for that the passing there
Had worn them really about the same,

And both that morning equally lay
In leaves no step had trodden black.
Oh, I kept the first for another day!
Yet knowing how way leads on to way,
I doubted if I should ever come back.

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

-- Robert Frost

Enabling XML syntax-highlighting for .config files in gVim

Posted by Dominic Cronin at Nov 23, 2012 10:15 PM

I've used the vi text editor for many years (at least long enough to know that it's pronounced vie and not vee-eye!). Over those years my level of expertise has varied somewhat - I'm fairly sure I've learned some commands and forgotten them several times over. Anyway - recently (i.e. in the last year or so), I've put some more effort into reacquainting myself with some of its many joys. In practice, of course, I really mean vim: I'd be hard-pressed to remember the last time I saw vi in its "good-old-fashioned" form (does one say plain-old-vi?). As most of my work is on Windows systems, this means using gVim.

Of the many improvements that vim has over vi, syntax highlighting is one of my favourites. The trouble is, one of my commonest use-cases for editing text files on Windows systems is .NET configuration files. Because these have a file extension of .config, they aren't recognised by default as XML files, and I end up going through the rigmarole of selecting one menu option to get a choice of file types added to the menus, and then locating XML among those newly added options to get highlighting to come on. Well there had to be a better way, and of course there was. What you have to do is this:

  • Locate your Vim directory (on the system I was working on this evening, it's "C:\Program Files (x86)\Vim\")
  • Having found this directory, locate or create C:\Program Files (x86)\Vim\vimfiles\ftdetect
  • In ftdetect, create a file called config.vim with the following contents:
au BufRead,BufNewFile *.config     set filetype=xml
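
To check that it's being picked up, open a .config file and ask Vim what it thinks:

:set filetype?

If all's well, the answer comes back as filetype=xml.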

I have Windows configured to use vi as the default editor for .config files, so now with this in place, all I have to do is double-click on the file and it opens with XML syntax-highlighting enabled. Great stuff!