
Dominic Cronin's weblog

Showing blog entries tagged as: note to Self

Powershell 5 for tired old eyes

Posted by Dominic Cronin at Jan 02, 2016 04:55 PM

Powershell 5 introduced syntax highlighting. This is, in general, a nice improvement, but I wasn't totally happy with it, so I had to find out how to customise it. My problems were probably self-inflicted to some extent, as I think at some point I had tweaked the console colour settings. Powershell is hosted in a standard Windows console, and the colours it uses are in fact the 16 colours available from the console.

The console colours start out by default as fairly basic RGB combinations. You can see these if you open up the console properties (right-clicking on the title bar of a console window will get you there). In the powershell, these are given names - the powershell has its own enum for these, which maps pretty directly onto the ConsoleColor enumeration of the .NET framework.
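
The names and numeric values are easy enough to check for yourself - this snippet just asks the enum directly (standard .NET, nothing version-specific):

# List the sixteen ConsoleColor names with their underlying numbers
[Enum]::GetValues([ConsoleColor]) | ForEach-Object { '{0,2}  {1}' -f [int]$_, $_ }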

ConsoleColor   Description                                   Red  Green  Blue
Black          The color black.                                0      0     0
Blue           The color blue.                                 0      0   255
Cyan           The color cyan (blue-green).                    0    255   255
DarkBlue       The color dark blue.                            0      0   128
DarkCyan       The color dark cyan (dark blue-green).          0    128   128
DarkGray       The color dark gray.                          128    128   128
DarkGreen      The color dark green.                           0    128     0
DarkMagenta    The color dark magenta (dark purplish-red).   128      0   128
DarkRed        The color dark red.                           128      0     0
DarkYellow     The color dark yellow (ochre).                128    128     0
Gray           The color gray.                               192    192   192
Green          The color green.                                0    255     0
Magenta        The color magenta (purplish-red).             255      0   255
Red            The color red.                                255      0     0
White          The color white.                              255    255   255
Yellow         The color yellow.                             255    255     0

In the properties dialog of the console these are displayed as a row of squares like this: 

and you can click on each colour and adjust the red-green-blue values. In addition to the "Properties" dialog, there is an identical "Defaults" dialog, also available via a right-click on the title bar. Saving your tweaks in the Defaults dialog affects all future consoles, not only powershell consoles.

In the Powershell, you can specify these colours by name. For example, the fourth one from the left is called DarkCyan. This is where it gets really weird. Even if you have changed the console colour to something else, it's still called DarkCyan. In the following screenshot, I have changed the fourth console colour to have the values for Magenta. 

Also of interest here is that the default syntax highlighting colour for a String is DarkCyan, and of course, we also get Magenta in the syntax-highlighted Write-Host command.

Actually - this is where I first had trouble. The next screenshot shows the situation after setting the colours back to the original defaults. You can also see that I am trying to change directory, and that the name of the directory is a String. 

My initial problem was that I had adjusted the Blue console color to have some green in it. This meant that a simple command such as CD left me with unreadable text with DarkCyan over a slightly green Blue background. This gave a particularly strange behaviour, because the tab-completion wraps the directory in quotes (making it a String token) when needed, and not otherwise. This means that as you tab through the directories, the directory name flips from DarkCyan to White and back again, depending on whether there's a space in it. Too weird...

But all is not lost - you also have control over the syntax highlighting colours. You can start by listing the current values using:

Get-PSReadlineOption

And then set the colours for the various token types using Set-PSReadlineOption. I now have the following line in my profile:

Set-PSReadlineOption -TokenKind String -ForegroundColor White

(If you use the default profile for this, you will be fine, but if you use one of the AllHosts profiles, then you need to check that your current host is a ConsoleHost.) 
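
If you do need that check, a minimal sketch would be:

# Only apply PSReadline tweaks when running in the console host
if ($host.Name -eq 'ConsoleHost') {
    Set-PSReadlineOption -TokenKind String -ForegroundColor White
}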

Anyway - lessons learned... Be careful when tweaking the console colours - this was far less risky before syntax highlighting... and you can also fix the syntax highlighting colours if you need to, but you can only choose from the current console colours. 

Spoofing a MAC address in Gentoo Linux

I spent a few hours this weekend fiddling with networking things at home. One of the things I ran into was that the DHCP server provided by my ISP was behaving erratically. Specifically, it was being very fussy about giving out a new lease. It would give out a lease to a Windows 7 system I was using for testing, but not to my Gentoo server. At some point, having spent the day with this kind of frustration, I was ready to put up with almost any hack to get things running. Someone on the #gentoo IRC channel suggested that spoofing the MAC address that already had a lease might be a solution. Their solution was to do this: 

ifconfig eth0 down
ifconfig eth0 hw ether 08:07:99:66:12:01
ifconfig eth0 up

Here, you have to imagine that eth0 is the name of the interface, although on my system it isn't any more. (Another thing I learned this weekend was about predictable interface names.) You should also imagine that 08:07:99:66:12:01 is the MAC address of the network interface on my Win7 system.
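
For what it's worth, on systems where net-tools has given way to iproute2, the equivalent would be a sketch along these lines (with the same imaginary interface name and MAC address):

# Spoof the MAC address with iproute2 instead of ifconfig
ip link set dev eth0 down
ip link set dev eth0 address 08:07:99:66:12:01
ip link set dev eth0 up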

The trouble with this is that it doesn't integrate very well with the standard init scripts that get things going on a Gentoo system. Network interfaces are started by running /etc/init.d/net.eth0 (although that's just a link to another script). The configuration is to be found in /etc/conf.d/net, where you can add directives that control the way your network interfaces are configured. The most important of these are the ones that begin with "config_". For example, to set up a static IP for eth0, you might say something like:

config_eth0="192.168.0.99 netmask 255.255.255.0 brd 192.168.0.255"

or for DHCP it's much simpler: 

config_eth0="dhcp"

So my obvious first try for setting up a spoofed MAC address was something like this:

config_eth0="dhcp hw ether 08:07:99:66:12:01"

but this didn't work at all. Anyway - after a bit of fiddling and more Googling (sorry - I can't remember where I found this) it turned out that there's a specific directive just for this purpose. I tried this:

mac_eth0="08:07:99:66:12:01"
config_eth0="dhcp"

It works a treat. Note that the order is important, which is obvious once you know it I suppose, but wasn't obvious to me until I'd got it wrong once. 
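
For completeness: to make the new settings take effect, you restart the interface (assuming the standard Gentoo init scripts - and as ever, your interface name may vary):

/etc/init.d/net.eth0 restart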

The good news after that was that for an established lease, everything worked rather better.

Moving your Tridion databases

Posted by Dominic Cronin at Oct 04, 2015 11:45 AM

As part of setting up my new laptop, I installed MSSQL and obviously I wanted to have my existing Tridion databases available. My Tridion image had previously not had a database - I had that running natively on the old laptop, but I'd decided to go with a more conventional approach and run it in the image with Tridion. This transition had a couple of interesting moments, and hence this post. 

Moving the databases and getting MSSQL security working again. 

The moving part was fairly simple. I just detached all the databases, copied the pairs of .MDF and .LDF files over to the new location, and attached them.

Once you've done this, you'll find that in each database, under Security/Users, there's a User with a name that matches the login that you use in your Tridion configuration... for example: TcmDbUser. Unfortunately, this isn't enough. There are (at least) two kinds of User. The one you can see in your database (strictly speaking, a "database principal") can't be used for logging in. For that you need a "server principal", and these are to be found in your MSSQL instance under Security/Logins. For everything to work correctly, there needs to be a mapping between the database principal and the server principal. You can see this if you look in a correctly configured system: right-click on the login, open the properties, and go to the User Mapping page. It should look something like this:

So what we're aiming for is to have a matching Login and database User, with the same name. Creating a Login is easy enough, but if you try to add the mapping by hand in the User Mapping page, it will fail, because it wants to create a database user, and a database user with the same name already exists. (You could delete it, but then you'd have a world of pain trying to figure out all the properties and settings that the Tridion database scripts take care of automatically. I'm not even sure if support would ever talk to you again if you did this.) 

Fortunately, there's a better way. You can do it via SQL with various ALTER USER commands, but then you are going to be deeper into the security features of MSSQL than any normal person ought to wish for. (In this context, DBAs aren't normal, but then they won't be needing to read this blog post, will they?) However, you don't need to figure out all that SQL, because there's a system procedure (sp_change_users_login) that does exactly what you want. As long as your Login and User have the same name, you can just use the Auto_fix method.
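
In T-SQL terms, that boils down to something like this sketch - the database name Tridion_cm is made up, and you'd run the equivalent in each of your Tridion databases:

-- Re-map the orphaned database user to the server login of the same name
USE Tridion_cm;
EXEC sp_change_users_login 'Auto_Fix', 'TcmDbUser';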

Remembering the database settings you'd forgotten about. 

So I had all the MSSQL stuff correctly set up, or so I thought, but when I started to try to use the Tridion GUI, I kept getting error notifications in the Message centre.

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible.
Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

This was pretty odd. I could see most of the GUI working fine, and publications were listed OK, but other lists weren't populated. I speculated that it might only be lists served via service calls that had problems, but when I checked the core service, it was able to list out my entire system. I spent quite some time fiddling with various settings and checking that named pipes etc. were configured correctly, before I eventually got smart enough to check T-REX again. In an old post from 2011, Rick Pannekoek suggested that a similar problem might be caused by the outbound email configuration.

Sure enough - I'd forgotten that outbound email has its own database configuration (if I'd ever known it - the installer sets it all up, and mostly you never need to look there unless you're actually doing outbound email). Anyway - I certainly hadn't realised that this would break the Content Manager's GUI.

A quick visit to: 

C:\Program Files (x86)\Tridion\config\OutboundEmail.xml

and then a bit of fiddling with decrypting and re-encrypting (there are scripts for this that come with the installer), and I had my system in fully working order. 

 

 

Parameter type quirks of the XSLT mediator

Posted by Dominic Cronin at Sep 27, 2015 11:45 AM

Today I was working on a template with an XSLT building block. I'd added a parameter to the package further up, and expected to use it simply by having an <xsl:param/> element with a matching name. Instead I got the error message you can see in the screencap below... Value cannot be null. Parameter name: parameter.

So what's going on here? Well I had a bit of a dig... (obviously by using my secret powers, and nothing as humdrum as technology) and came up with a couple of interesting things. Firstly, the way I'd imagined things was all wrong. I had assumed that the mediator would loop through the package variables, and add them as parameters to the XSLT. In fact, it's the other way round. The mediator parses the XSLT to get the param elements that are declared, and loops through these to see if it can find a satisfactory parameter to add.

If you look in the documentation, you will find that there are some "magic" parameter names that will cause the mediator to pass various relevant data items as parameters. These are tcm:Publication, tcm:ResolvedItem, tcm:ResolvedTemplate and tcm:XsltTemplate. In addition to these documented parameters, tcm:Page and tcm:ComponentTemplate would also appear to work under the correct circumstances, but of course, if you want your templates to be future-proof, it's better not to use such undocumented features, especially seeing as you could just add the relevant items as XML to your package, and have the same result. It all reminds me of the old XSLT component templates, that also had magic parameters that very few people knew about.

Anyway, back to my bug - for it is indeed a bug. In addition to providing magic parameters, of course the mediator also wires up parameters that are in the package. So - having found a parameter name in the XSLT, it looks for a package item with the matching name. If the item is of type "text" or "html", then it gets added as a string. For any other item type, it tries to get an appropriate XmlDocument and add that. If this process fails, any exceptions get swallowed, and instead of an XmlDocument the "parameter" parameter of AddParam becomes null. And then we see the aforementioned "Value cannot be null. Parameter name: parameter" message, which is the .NET framework quite correctly checking its input values and refusing to play.

The solution is easy - instead of using ContentType.String when I added my parameter to the package, I used ContentType.Text, and everything worked like a charm. But not obvious, and hence the blog post. I'm sure to forget this, and having it in my "external memory" might help.
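
For the record, here's a minimal sketch of the fix as it looks in a C# TBB. The parameter name MyXsltParam and the value are made up; the name just has to match the <xsl:param/> declared in your stylesheet:

using Tridion.ContentManager.Templating;
using Tridion.ContentManager.Templating.Assembly;

[TcmTemplateTitle("Push XSLT parameter")]
public class PushXsltParameter : ITemplate
{
    public void Transform(Engine engine, Package package)
    {
        // ContentType.Text is passed to the XSLT as a plain string.
        // ContentType.String would send the mediator down the failed
        // XmlDocument conversion path described above.
        Item parameter = package.CreateStringItem(ContentType.Text, "some value");
        package.PushItem("MyXsltParam", parameter);
    }
}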

It's easy to see how this could happen. In fact, it's our old friend LOLA (the Law of Leaky Abstractions). The GetAsXmlDocument() method of a Templating Item returns a null if it can't manage to return the relevant XmlDocument - for all I know, this is the correct semantics for such a method. Maybe there are very good reasons for it. Still - if you're writing client code and you don't know this, you'll fail to do the null check, and things will break. FWIW the null check is also missing in older versions of the mediator.

So - there - I've got that off my chest. I should probably report this to customer support. But it's the weekend, and seeing as my stuff works, and the answer is now google-able, I might possibly not have that much energy :-)

Vim Windows weirdnesses

Posted by Dominic Cronin at Dec 22, 2014 09:43 PM

This is just a quick note-to-self to remind me of the stuff I always forget when installing plugins and the like for Vim on a Windows machine. So of course this means gVim. The confusing thing is always that the documentation for everything refers to your ~/.vim directory. And - you haven't got one. Here's the note to self.

Your ~/.vim directory is called vimfiles

And ~ is probably somewhere like C:\Users\dominic - your .vimrc will be there too, so you can find it by running vim and doing

:echo $MYVIMRC
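
And if you're not sure where ~ actually points on your machine, vim will tell you that too:

:echo expand('~')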

What not to do when upgrading to Grub2

Posted by Dominic Cronin at Feb 08, 2014 04:10 PM

I'd been following the Gentoo Wiki guidance on upgrading Grub, and had been taking it very carefully. I'd worried about getting this right, as getting it wrong would leave me with a brick, so I'd been very pleased to see the notes on using the old bootloader to chain-load the new one. That way I could check that my configuration was correct before taking the plunge of installing the new version into the Master Boot Record. I didn't want to automatically generate the new config file, as I didn't trust it. (Rightly so, as it turned out, because my initrd files didn't follow the strict naming requirements, and so weren't picked up by the config generation script.) Anyway - the hand-written config was half a dozen lines long, and the generated one was utterly incomprehensible.

So anyway - I managed to create the config file, and get everything set up for chain loading. I rebooted the server, and bingo - there was the chain loader entry in my "old" boot screen, and when I followed it, I got the new menu and could boot the server. Great stuff! Now it should have been a simple question of running grub2-install, and I'd be finished. So I did this, and then.... the computer wouldn't start. Fortunately I had a grub prompt, so grub was "working" - but it obviously couldn't find its config file. I already knew that with the right incantations it might be possible to get the thing to boot without a config file, and after a bit of googling, I got enough clues to attempt it. (For the record, what I think I'd done wrong was to fail to remount /boot after my chain test and before running grub2-install, with the result that grub then didn't know how to correctly find /boot.)

It took a few attempts, but the command line completion in grub helps a lot. This is what I eventually ended up typing at the grub prompt to get a working boot.

grub > set root=(hd0,1)
grub > linux /kernel-gen-newudev-3.3.8-gentoo root=/dev/sda3
grub > initrd /initramfs-gen-newudev-3.3.8-gentoo
grub > boot

Note that the root for the boot loader is different from the root of the operating system, so you have to specify them separately. Obviously YMMV for the names of the kernel and initrd files, not to mention device identifiers.

But the real advice here is to avoid missing out that crucial mount operation!!
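
In other words, the step I skipped amounts to something like this (device names are examples - on Gentoo, /boot is quite often not mounted by default):

mount /boot
grub2-install /dev/sda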

Gentoo emerge dies with 'failed to open /dev/urandom' when wrong default python is configured.

Posted by Dominic Cronin at Jan 22, 2014 12:12 AM

So there I was - just for fun building my new Gentoo system, when all of a sudden, I wasn't. Building, that is. I wasn't building anything. In fact, part of the motivation for a clean build had been that emerging new things was getting tiresomely fragile. Anyway - here's what happened when I tried an emerge. The interesting part is where it says: Fatal Python error: Failed to open /dev/urandom

>>> Emerging (1 of 18) sys-libs/glibc-2.17
 * Fetching files in the background. To view fetch progress, run
 * `tail -f /var/log/emerge-fetch.log` in another terminal.
 * glibc-2.17.tar.xz SHA256 SHA512 WHIRLPOOL size ;-) ...                                                                            [ ok ]
 * glibc-2.17-patches-8.tar.bz2 SHA256 SHA512 WHIRLPOOL size ;-) ...                                                                 [ ok ]
make -j2 -s glibc-test
make -j2 -s glibc-test
>>> Unpacking source...
 * Checking gcc for __thread support ...                                                                                             [ ok ]
 * Checking kernel version (3.3.8 >= 2.6.16) ...                                                                                     [ ok ]
 * Checking linux-headers version (3.9.0 >= 2.6.16) ...                                                                              [ ok ]
>>> Unpacking glibc-2.17.tar.xz to /var/tmp/portage/sys-libs/glibc-2.17/work
>>> Unpacking glibc-2.17-patches-8.tar.bz2 to /var/tmp/portage/sys-libs/glibc-2.17/work
 * Applying Gentoo Glibc Patchset 2.17-8 ...
 *   0035_all_glibc-2.16-i386-math-feraiseexcept-overhead.patch ...                                                                  [ ok ]
 *   0059_all_glibc-2.19-make-4.0.patch ...                                                                                          [ ok ]
 *   0065_all_glibc-2.18-qecvt-guards.patch ...                                                                                      [ ok ]
 *   0070_all_glibc-2.18-localedef-page-align-1.patch ...                                                                            [ ok ]
 *   0071_all_glibc-2.18-localedef-page-align-2.patch ...                                                                            [ ok ]
 *   0072_all_glibc-2.18-localedef-page-align-3.patch ...                                                                            [ ok ]
 *   0085_all_glibc-disable-ldconfig.patch ...                                                                                       [ ok ]
 *   0090_all_glibc-2.17-arm-ldso.cache.patch ...                                                                                    [ ok ]
 *   1005_all_glibc-sigaction.patch ...                                                                                              [ ok ]
 *   1008_all_glibc-2.16-fortify.patch ...                                                                                           [ ok ]
 *   1040_all_2.3.3-localedef-fix-trampoline.patch ...                                                                               [ ok ]
 *   1055_all_glibc-resolv-dynamic.patch ...                                                                                         [ ok ]
 *   1505_all_glibc-nptl-stack-grows-up.patch ...                                                                                    [ ok ]
 *   1506_all_glibc-2.17-hppa-fpu.patch ...                                                                                          [ ok ]
 *   1507_all_glibc-2.17-hppa-ldso-flag.patch ...                                                                                    [ ok ]
 *   1507_all_hppa-ia64-DL_AUTO_FUNCTION_ADDRESS.patch ...                                                                           [ ok ]
 *   1508_all_glibc-2.17-hppa-futex.patch ...                                                                                        [ ok ]
 *   1508_all_hppa-fanotify_mark.patch ...                                                                                           [ ok ]
 *   3020_all_glibc-tests-sandbox-libdl-paths.patch ...                                                                              [ ok ]
 *   5063_all_glibc-dont-build-timezone.patch ...                                                                                    [ ok ]
 *   6024_all_alpha-fix-signal-thunk-unwind-info.patch ...                                                                           [ ok ]
 *   6230_all_arm-glibc-hardened.patch ...                                                                                           [ ok ]
 * Done with patching
 * Using GNU config files from /usr/share/gnuconfig
 *   Updating scripts/config.sub                                                                                                     [ ok ]
 *   Updating scripts/config.guess                                                                                                   [ ok ]
>>> Source unpacked in /var/tmp/portage/sys-libs/glibc-2.17/work
Fatal Python error: Failed to open /dev/urandom
/usr/lib/portage/bin/phase-functions.sh: line 87:  4204 Aborted                 "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}"
 * ERROR: sys-libs/glibc-2.17::gentoo failed (unpack phase):
 *   filter-bash-environment.py failed
 *
 * Call stack:
 *            ebuild.sh, line 714:  Called __ebuild_main 'unpack'
 *   phase-functions.sh, line 993:  Called __filter_readonly_variables '--filter-features'
 *   phase-functions.sh, line 137:  Called die
 * The specific snippet of code:
 *      "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}" || die "filter-bash-enviroment.py failed"

So what was going on here? Well, as it turned out, my system has three versions of python installed, and Gentoo's portage system (of which emerge is part) seems to rely on the default python not being python 3. After a short bit of fiddling with "eselect python list" and "eselect python set" to get the default python back to 2.7, the build ran like a charm.
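
For reference, the fiddling amounts to something like this (the exact target name or slot number will vary per system - take it from the output of the list command):

# Show the installed interpreters and which one is the default
eselect python list
# Point the default back at python 2.7
eselect python set python2.7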

So anyway - this has got to count as the most bizarrely mis-reported error I've seen in recent years. /dev/urandom was working fine. I could start it and stop it ("/etc/init.d/urandom stop" and so forth) and I could use it to access randomness. Why then did I get the "failed to open" message with one version of python, and not with another? Answers on a postcard? Whatever - this was a public service announcement.

Getting IIS Express to run in a 64 bit process, and other fun Tridion content delivery configurations

Posted by Dominic Cronin at Jul 24, 2013 07:55 PM

In the last couple of days, I've spent far more time than I'd like figuring out how to get a Tridion-based web application to run correctly under Visual Studio. There are three basic choices:

  1. Run it directly using Visual Studio
  2. Run it using IIS Express
  3. Run it using IIS (non-Express version)

As the application is intended to run on a 64 bit architecture, there are some challenges. Visual Studio runs in 32 bit mode, so the first option is out. Using full-on IIS is an attractive thought; you can manually configure the application pool to run in 64 bit mode. Unfortunately, getting a debug session up and running takes more configuration than that. You have to set up the web site correctly, and it was just too fiddly. I ran out of time, or steam or whatever. (Somebody will probably tell me it's easy, and I dare say it is when you know how, and aren't spending time you really should be spending on something else. Any hints are always welcome.)

Of course, with a Tridion site, half the game is making sure you have the correct DLLs in place for the processor architecture you are using. Along the way, I discovered that the quick and dirty way to tell whether you have a 32 or 64 bit version of xmogrt.dll (JuggerNET's "native" layer) is the file size. The 64 bit version comes in at 1600KB and the 32 bit version is about half that at 800KB or so. This varies from version to version, so on a 2013 system it's 1200ish/900ish KB, but once you get the hang of it, you can tell them apart at sight, which is pretty useful. The other DLLs are also important, although as far as I can tell, only Tridion.ContentDelivery.AmbientData.dll is hard-compiled for 64 bit architecture, at least on the 2011 system I was working on. The rest of the .NET assemblies are compiled to MSIL, which, of course, will run on either architecture.

But I digress. The thing I wanted to blog (and this will definitely be tagged note-to-self) was how to get IIS Express to run in 64 bit mode. By default it runs on 32 bits, but if you follow this link:

http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/3254745-allow-for-iis-express-64-bit-to-run-from-visual-st

... you will find the following nugget of goodness:

You can configure Visual Studio 2012 to use IIS Express 64-bit by setting the following registry key:

reg add HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\WebProjects /v Use64BitIISExpress /t REG_DWORD /d 1

However, this feature is not supported and has not been fully tested by Microsoft. Improved support for IIS Express 64-bit is under consideration for the next release of Visual Studio.

Very handy indeed. Running under IIS Express is then just one click of a button. Just works.

And by way of a PS (Post Script, that is, not PowerShell), here's how you find the processor architecture of a DLL (this time on my 2013 image):

PS C:\inetpub\www.visitorsweb.local\bin> [reflection.assemblyname]::GetAssemblyName((resolve-path '.\Tridion.ContentDelivery.AmbientData.dll')).ProcessorArchitecture
MSIL

Well anyway - it's no fun scratching your head over stuff like this. Maybe this helps.

Pacman

Posted by Dominic Cronin at Apr 17, 2013 09:55 PM

This is mostly by way of a "note to self". I've recently started working at a customer where connecting my computer to their network is not just allowed, but necessary. Once connected, if I want to use the Internet, I have to go through their filtering proxy - presumably to keep the badness of the Internet from their systems (and yes, they do pay a lot of attention to ensuring the machine is virus-free). Previously, when I worked there for a day or two, setting up the proxy was a minor irritation, but as I'm going to be there rather longer, the idea of reconfiguring my networking twice a day started to look pretty unattractive. My first attempt at solving this had been a couple of scripts that set up the proxy by making the relevant registry settings, but unfortunately, Windows doesn't pick these up immediately. Yeah - sure - if I could remember to run the scripts before shutting down it might work, but I'm not that obsessive. Or I could get Windows to pick up the settings by opening the various screens... Internet Options... Connections.... LAN Settings... oh wait... there had to be a better way.

It turns out that there's something called a Proxy Auto-configuration (PAC) file. If you select "Automatically detect settings", then Windows will try to locate one of these on the network using the Web Proxy Auto-discovery Protocol; however, the customer in question doesn't do this. My needs were simple enough, though, so I checked the next box down: the one that says "Use automatic configuration script". All that remained was to create the script.

It turns out that you write such things in JavaScript: it's simply a matter of writing a function called FindProxyForURL, as named in the PAC standard, using the other functions the standard makes available. Here's what I ended up with (although I'll probably add refinements):

function FindProxyForURL(url, host) {
	var customerProxy = "PROXY 10.62.40.42:1234";

	if (atCustomer()){
		if(dnsDomainIs(host, ".internal.customer.com") || dnsDomainIs(host, "localhost")|| dnsDomainIs(host,".local")){
			return "DIRECT";
		}
		else return customerProxy;
	} else {
		return "DIRECT";
	}
	
}

function atCustomer(){
	return isResolvable("server.not.on.external.dns");
	// or maybe
	// return isInNet(myIpAddress(), "10.62.0.0", "255.255.0.0"); 
}

Nothing fancy, but it works. I suspect I'll find a few edge cases where I maybe have to enhance the script or even configure things by hand, but for now I have the satisfaction of knowing I can just turn up, plug in, and start work.

Mysterious 404 errors showing up in the Tridion message centre

Posted by Dominic Cronin at Dec 19, 2012 11:37 PM

Today I spent some time setting up a Tridion 2011 Content Manager server. In fact, the content manager had already been installed and had been working fine. Then we'd installed Microsoft Search Server. OK - so it's quite unusual to be doing quite so much all on one server, but this is a customer with minimal needs. Not everyone has 200 servers in the rack! Although Search Server is packaged as a product in its own right, it's built on Sharepoint, and when you install it, it seems to bring two thirds of Sharepoint with it, including two MSSQL instances and three web sites. So to get the benefit of Microsoft's "free" search services, we'll probably have to configure another couple of gigs of RAM. (SFX: sound of a cash register going "ca-ching" at VMWare headquarters.)

Anyway, to be fair, the search solution looks pretty good and it definitely does what it says on the box, although it's got about a hundred configuration screens (I haven't actually counted them, though). Well anyway - we'd installed this beast on our previously working Tridion server, and most things were going OK. Until I did an IISRESET, and then suddenly the Tridion CME started to complain about a 404 problem. So when you started the CME, you'd get error messages like:

The remote server returned an error: (404) Not Found.

On examining the message centre, I found this message six times, along with "Loading list of languages failed" and "Loading list of locales failed". Sure enough, the relevant drop-downs in the User preferences were not populated.

When I F12'd the browser (is there a verb, "to F12"? There should be), I could see that the browser wasn't seeing any responses with HTTP status 404. So what was going on?

After digging a bit on the server, I found that there were entries in the web server log like this:

2012-12-19 12:59:41 ::1 POST /WebUI/Models/CME/Services/General.svc/GetListCustomPages - 80 BLAH\Administrator ::1 - 404 0 0 58
2012-12-19 12:59:41 ::1 POST /WebUI/Models/CME/Services/General.svc/GetListFavorites - 80 BLAH\Administrator ::1 - 404 0 0 62
2012-12-19 12:59:41 ::1 POST /WebUI/Models/CME/Services/General.svc/GetListSystemAdministration - 80 BLAH\Administrator ::1 - 404 0 0 15
2012-12-19 12:59:41 ::1 POST /WebUI/Models/TCM54/Services/Lists.svc/GetList - 80 BLAH\Administrator ::1 - 404 0 0 30
2012-12-19 12:59:41 ::1 POST /WebUI/Models/TCM54/Services/Lists.svc/GetListEnumerationValues - 80 BLAH\Administrator ::1 - 404 0 0 5
2012-12-19 12:59:41 ::1 POST /WebUI/Models/TCM54/Services/Lists.svc/GetListEnumerationValues - 80 BLAH\Administrator ::1 - 404 0 0 8

So I could see from here that the errors were occurring when the CME web application made a local call-back on the server to its own service layer. A bit more poking around showed that the problem appeared whenever the CME made a callback to a service.

So what was going on? (Did I ask that already?)

It turned out that installing large portions of Sharepoint had had the undesired effect that the Tridion CME web site no longer owned the default binding. We had a host header binding mapped in IIS, and you could reach this just fine, but since the install, traffic aimed at 'localhost' was going to the wrong web site. Actually, Tridion has got this covered, because in the WebRoot Web.Config there's an app setting called "Tridion.WCF.RedirectTo". This was pointing to localhost (which had worked fine when the server was first installed). So when the CME tried to make calls back to the Model services, it was aiming these calls at localhost, which, of course, ended up in the Sharepoint site and a 404.

We fixed the immediate problem by editing the IIS bindings, but we're considering whether it might be good practice to always configure Tridion.WCF.RedirectTo to go to the name of your site, and not to localhost.
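
That would amount to an appSetting in the WebRoot Web.Config along these lines - the host name here is made up; use whatever host header your CME site actually binds to:

<appSettings>
  <!-- Point the CME's callbacks at its own host header rather than localhost -->
  <add key="Tridion.WCF.RedirectTo" value="http://tridion.cms.local" />
</appSettings>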

The relevant Tridion documentation is here.