Hyper-V, KEMP LoadMaster & DFS Replication Provide An FTP Solution For A Surveyors' Network


Remember the blog entry A Hardware Load Balancing Exercise With A KEMP LoadMaster 2200, about providing redundancy for a surveyors' GPS network? Well, last month we got commissioned to come up with a redundant FTP solution for their needs, and this blog is about what we came up with. The aim was to make do with what was already available.


FTP 7.5 in Windows 2008 R2

We use the FTP server available in Windows 2008 R2, which provides us with all the functionality we need: user isolation and FTP over SSL.

The data from all the GPS stations is sent to the FTP server for safekeeping and is used to overcome certain issues customers might have with missing data from the surveying solution. This data is not made available to customers by default; it's only for special cases & purposes. So we collect the data in its own folder, named after its account, so we can configure user isolation. This also prevents the GPS stations from writing to locations where they shouldn't.

As every GPS station logs in with the "Station" account it ends up in the "Station" folder as its FTP root folder and can't read or write outside of that folder. The survey solution's service desk can FTP into that folder and access any data they need.
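To illustrate the folder layout user isolation expects, here's a minimal PowerShell sketch, assuming an FTP root of D:\FTPRoot and local accounts named "Station" and "Data" (paths and names are illustrative, not the production ones). With FTP 7.5 user isolation in "User name directory" mode, a local user logging in is jailed to <FTP root>\LocalUser\<username>:

# Create the isolation folder structure FTP 7.5 expects for local accounts.
# Assumption: D:\FTPRoot is the FTP site's physical root (illustrative).
$ftpRoot  = 'D:\FTPRoot'
$accounts = 'Station', 'Data'   # GPS stations account & the application account

foreach ($account in $accounts) {
    $path = Join-Path $ftpRoot "LocalUser\$account"
    if (-not (Test-Path $path)) {
        New-Item -ItemType Directory -Path $path | Out-Null
    }
}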

The data that’s being provided by the software solution (LanSurvey01 and lanSurvey02) is to be sent to its own folder “Data” that is also set up with user Isolation to prevent the application from reading or writing anywhere else on the file system.

The data from the application should be publicly available to the customers, so we created a separate FTP site called "Public" that is configured for anonymous access to the same Data folder, but with read permissions only. This way the customers can get all the data they need but have nothing more than read access.
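As a hedged sketch of how that read-only anonymous site can be configured from the command line (assuming the site really is named "Public"; run from an elevated PowerShell prompt), an FTP authorization rule granting everyone read access does the trick, since only anonymous authentication is enabled on the site:

# Assumption: the anonymous FTP site is called "Public" (illustrative name).
$appcmd = "$env:windir\system32\inetsrv\appcmd.exe"

# Grant read-only access; with only anonymous authentication enabled on the
# site, this effectively gives customers anonymous read access and nothing more.
& $appcmd set config "Public" -section:system.ftpServer/security/authorization /+"[accessType='Allow',users='*',permissions='Read']" /commit:apphost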

For more information on setting up FTP 7.5 and using FTP over SSL, take a look at http://learn.iis.net/page.aspx/304/using-ftp-over-ssl/ and read my blog post Pollution of the Gene Pool, a Real Life "FTP over SSL" Story.

High Availability

In the section above we've taken care of the FTP needs. Now we still need redundancy. We could have used Windows NLB, but this network already uses a KEMP LoadMaster because the surveyor's software has some limitations in its configuration capabilities that rule out Windows Network Load Balancing.

We want both the GPS stations and the surveyor's application servers to be able to send FTP data when one of the receiving FTP servers is down for some reason (updates, upgrades, maintenance or failure). So we set up a VIP for FTP on the KEMP LoadMaster. This VIP is what the GPS stations and the application use to write, and the customers use to read, the FTP data.

DFS-R to complete the solution

But up until now we've been ignoring an issue. When load balancing data pushed to multiple hosts we need to ensure that all the data is available on all the nodes all of the time. You could opt to have the users access the FTP service via the VIP address only and push the data to both nodes directly, without load balancing. That might be done at the source, but then you have twice the amount of data to push out, and it means extra work to configure and maintain the solution. We could also copy the data to one FTP node and replicate it from there. That works, but leaves you very vulnerable to a service outage: when the node that gets the original copy is down, no new data will be available. Another issue is that you need a rock solid way to copy the data and have it done in a timely manner, even after down time of one or more of the nodes.

As you read above we provide a load-balanced VIP as a target for the surveyor's application and the GPS stations to send their data to. This means the data will be sent to the FTP array even if one of the nodes is down for some reason. To get the data that arrives from 2 application servers and from 40 GPS stations synchronized and up to date on both nodes we use Distributed File System Replication (DFS-R), built into Windows 2008 R2. We have no need for a DFS namespace here, so we only use the replication feature. This is easy and fast to set up (add the DFS Replication service from the File Services role) and it doesn't require any service down time (no reboot required). The fact that both FTP nodes are members of a Windows 2008 R2 domain helps make this easy. To make sure we have replication in all directions we opt to set it up as a full mesh, and the replication schedule is 24/7, no days off Smile. Since we chose to replicate the FTP root folder we have both the Data and the Station folders covered, as well as the folder structure needed to make FTP user isolation work.
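For those who prefer the command line over the DFS Management wizard, here's a condensed sketch of that replication setup using dfsradmin.exe. The group, folder, server names and paths are all illustrative, and the exact dfsradmin syntax is worth double-checking with dfsradmin /? before running anything:

# Assumptions: two domain-joined nodes FTP01/FTP02 replicating D:\FTPRoot.
$rg    = 'FTP-Root-RG'        # replication group name (illustrative)
$rf    = 'FTPRoot'            # replicated folder name (illustrative)
$nodes = 'FTP01', 'FTP02'     # the two FTP nodes behind the VIP
$path  = 'D:\FTPRoot'         # local path replicated on each node

dfsradmin rg new /rgname:$rg
dfsradmin rf new /rgname:$rg /rfname:$rf
foreach ($node in $nodes) { dfsradmin mem new /rgname:$rg /memname:$node }

# Full mesh between two members: one connection per direction.
dfsradmin conn new /rgname:$rg /sendmem:$($nodes[0]) /recvmem:$($nodes[1])
dfsradmin conn new /rgname:$rg /sendmem:$($nodes[1]) /recvmem:$($nodes[0])

# Point the replicated folder at the FTP root on each node; the first
# member seeds the initial sync.
dfsradmin membership set /rgname:$rg /rfname:$rf /memname:$($nodes[0]) /localpath:$path /membershipenabled:true /isprimary:true
dfsradmin membership set /rgname:$rg /rfname:$rf /memname:$($nodes[1]) /localpath:$path /membershipenabled:true

Again, this is a sketch; the DFS Management GUI achieves the same in a few wizard screens.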

This solution was built fast and easily using Windows 2008 R2 out of the box functionality: FTP(S) with user isolation and DFS-R. The servers are running as Hyper-V guests in a Hyper-V cluster, providing high availability through Live Migration.

Move that DFS Namespace to Windows 2008 Mode


As promised a while back (Busy, busy, busy) here's a quick heads up for all you people out there who are still running their DFS namespaces in Windows 2000 mode. When you're running Windows 2008 (R2) you should get those namespaces moved to Windows 2008 mode sooner or later anyway. For some reason there is no GUI option or PowerShell cmdlet to do this. It's pretty amazing how many of those of you that can change to Windows 2008 mode haven't done it yet. Perhaps the somewhat involved manual process has something to do with that? But OK, if you still see this when you look at the properties of your DFS namespace …

[screenshot: namespace properties dialog showing Windows 2000 mode]

… then perhaps it's time to visit TechNet, where you'll find some info on doing it semi-automated, as they call it: Migrate a Domain-based Namespace to Windows Server 2008 Mode

Here's a recap of the steps to take (a condensed version follows the list):

1. Open a Command Prompt window and type the following command to export the namespace information to a file, where \\domain\namespace is the name of the appropriate domain and namespace and path\filename is the path and file name of the export file: Dfsutil root export \\domain\namespace path\filename.xml

2. Write down the path (\\server\share) for each namespace server. You must manually add namespace servers to the recreated namespace because Dfsutil cannot import namespace servers.

3. In DFS Management, right-click the namespace and then click Delete, or type the following command at a command prompt, where \\domain\namespace is the name of the appropriate domain and namespace: Dfsutil root remove \\domain\namespace

4. In DFS Management, recreate the namespace with the same name, but use the Windows Server 2008 mode, or type the following command at a command prompt, where \\server\namespace is the name of the appropriate server and share for the namespace root: Dfsutil root adddom \\server\namespace v2

5. To import the namespace information from the export file, type the following command at a command prompt, where \\domain\namespace is the name of the appropriate domain and namespace and path\filename is the path and file name of the file to import: Dfsutil root import merge path\filename.xml \\domain\namespace. To minimize the time that is required to import a large namespace, run the Dfsutil root import command locally on a namespace server.

6. Add any remaining namespace servers to the recreated namespace by right-clicking the namespace in DFS Management and then clicking Add Namespace Server, or by typing the following command at a command prompt, where \\server\share is the name of the appropriate server and share for the namespace root: Dfsutil target add \\server\share
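Putting the six steps together, here's what the whole run looks like from an elevated prompt. This is just the recap condensed; \\domain\namespace and \\server\namespace are the placeholders from the steps above, and the export file name and the second server are examples of my own:

# 1. Export the namespace information (C:\dfs-export.xml is a placeholder).
dfsutil root export \\domain\namespace C:\dfs-export.xml
# 2. Write down \\server\share for every namespace server first!
# 3. Remove the old Windows 2000 mode namespace.
dfsutil root remove \\domain\namespace
# 4. Recreate it in Windows Server 2008 mode (v2).
dfsutil root adddom \\server\namespace v2
# 5. Import the export file; run this locally on a namespace server.
dfsutil root import merge C:\dfs-export.xml \\domain\namespace
# 6. Re-add the remaining namespace servers (dfsutil cannot import them).
dfsutil target add \\server2\namespace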

Be sure to read the community comments, as the ampersand issue might affect you. As always it's good to do some research on possible updates affecting the technology at hand, so that's what I did. And look what "Binging" for "DFS windows 2008 R2 updates" produced: List of currently available hotfixes for Distributed File System (DFS) technologies in Windows Server 2008 and in Windows Server 2008 R2. We might as well put them into action and be on the safe side.

If you're not using DFS-N or DFS-R yet, give them a look. No, they're not the perfect solution for every scenario, but for namespace abstraction (server replacements, moving data, server renaming, …) and keeping data available during maintenance or at the right place (branch offices for example) they're nice tools to have. As another example, I just recently used DFS-R in full mesh to sync the nodes in a load-balanced FTP solution where about 45 devices put data on the FTP servers (Windows 2008 R2) via the VIP so they have a resilient FTP service.

A VDI Reality Check @ BriForum 2011 For Resource Hungry Desktops In A Demanding Environment


So what did we notice? VDI generates plenty of interest from various angles, that is for sure, both on the demand side and on the (re)seller & integrator side. Most storage vendors are bullish enough to claim that they can handle whatever IOPS are required to get the most bang for the buck, but only the smaller or newest players were present and engaged in interaction with the attendees. One thing is for sure: VDI has some serious potential, but it has to be prepared well and implemented thoroughly. Don't do it over the weekend and see if it works out for all your users Smile

The amount of tools & tactics for VDI, on both the storage side and the configuration/management side, is more complex and diverse than with server virtualization. The possible variations on how to tackle a VDI project are almost automatically more numerous as well. This is due to the fact that desktops are often a lot more complex and heterogeneous in nature than server side apps. On top of that the IO on a desktop can be quite high. Some of it can be blamed on the client OS, but a lot has to do with the applications and utilities used on desktops. I think developers had so many resources at their disposal that there wasn't too much pressure on optimization there. The age of multiple cores and x64 will help in thinking more about how an application uses CPU cycles, but virtualization might very well help in abstracting that away. When a PC has one vCPU and the host has 4*8 cores, how good is that hypervisor at using all that pCPU power to address the needs of that one vCPU? But I digress. All in all it takes more effort and complexity to do VDI than server virtualization. So there is a higher cost, or at least the CAPEX isn't such a convincing, clear cut story as it is with server virtualization. If you're not doing the latter today when and where you can, you are missing out on a major number of benefits that are just too good to ignore. I wouldn't dare say that for VDI. Treating VDI just like server virtualization is said to be one of the main reasons for VDI failing, being put on hold, or being limited to a smaller segment of the desktop population.

My experience with server virtualization is also with rather heterogeneous environments, where we have VMs with anything between 1 and 4 virtual CPUs and 2 to 12 GB of RAM. And yet I have to admit it has been a great success. Nevertheless, I can't say that helped me much in my confidence that a large part of our desktop environment can be virtualized successfully and cost effectively, as I think our desktops are such vicious resource hogs they need another step forward in raw power and functionality versus cost. Let me briefly describe the environment. 85% of the workforce at my current gig have dual 24” wide screens, with anything between 4 GB and 8 GB of RAM, quad core CPUs and SCSI/SATA 10,000 RPM disks with anything between 250 GB and 1 TB of local storage, in combination with very decent GPUs. The employees run Visual Studio, SQL Server, multiple CAD & GIS packages and various specialized image processing software that churns through image and other files that can be 2 GB or even larger. If they aren't that large then they are still very numerous. On top of that, 1 Gbps network to the desktop is the only thing we offer anymore. So this is not a common office-suite-plus-a-couple-of-LOB-applications order, this is a large and rich menu for a very hard to please audience. That means that if you ask them what they want, they only answer more, more, more … And I won't even mention 3D screens & goggles.

Now I know that X amount of time the machines are idle or doing a lot less, but in the end that's just a very nice statistic. When a couple of dozen users start playing around with those tools and throw that data around, you still need them and their colleagues to be happy customers. Frankly, even with the physical hardware they have now that can be a challenge. And please don't start about better, less resource wasting applications and such. You can't just f* the business and tell them to get or wait for better apps. That flies in the face of reality. You have to be able to deliver the power where and when needed with the software they use. You just can't control the entire universe.

I heard about integrators achieving 40-60 VMs per host in a VDI project. Some customers can make do with Windows 7 and 1 GB of RAM. I'm not one of those. I think the guys & gals of the service desk would need armed escorts if we rolled that out to the employees they care for. One of the things I noticed is that a lot of people choose to implement storage just for VDI. I'm not surprised. But until now I've not needed to do that. Not even for databases and other resource hogs. Separate clusters, yes, as the pCPU/vCPU ratio and memory requirements differ a lot from the other servers. The fact that the separate cluster uses other HBAs and LUNs also helps.

Next to SANs, local storage for VDI is another option for both performance and cost. But for recovery this isn't quite that good a solution. The idea of having non-persistent disks (in a pool), or a combination of that with persistent disks, is not something I can see fly with our users. And frankly, a show of hands at BriForum seems to indicate that this isn't very widespread. VDI takes really high performance storage, isolated from your server virtualization, to make it a success. On top of that you need control, rapid provisioning, user virtualization & workspace management in a layered/abstracted way. Lots of interest there but again, yet more tools to get it done. Then there is also application virtualization, terminal service based solutions etc. So we get a more involved, diverse and expensive solution compared to server virtualization. Now to offset these costs we need to look at what we can gain. So where are the benefits to be found?

With non-persistent disks you have rapid provisioning of known good machines in a pool, but your environment must accept this and I don't see this flying well in the face of the reality of consumerization of ICT. De-duplication and thin provisioning help to get the storage needs under control, but the bigger and the more diverse the client side storage needs are, the fewer gains can be found there. Better control, provisioning, resource sharing, manageability, disaster recovery: it is all possible, but it is all so very specific to the environment compared to server virtualization, and some solutions contradict gains that might have been secured with other approaches (disaster recovery and business continuity with SAN versus local storage). One of the most interesting possibilities for the environment I described was perhaps doing virtualization on the client. I look at it as booting from VHD in the Windows 7 era but on steroids. If you can safeguard the images/disks on a SAN with de-duplication & thin provisioning you can have high availability & business continuity, as losing the desktops is a matter of pushing the VM to other hardware, which due to the abstraction by virtualization shouldn't be a problem. It also deals with the network issues of VDI, a hidden bottleneck as most people focus on the storage. Truth be told, the bandwidth we consume is so big that VDI might well bring its best improvements for us on that front.

Somewhat surprising was that Microsoft, whilst really present at PubForum in Dublin, was nowhere to be seen at BriForum. Citrix was saving its best for their own conference (Synergy) I think. Too bad, I mean when talking about VDI in 2011 we're talking about Windows 7 for the absolute majority of implementations, and Citrix has a strong position in VDI, really giving VMware a run for their money. Why miss the opportunity? And yesterday at TechEd USA we heard the HSBC story of a 100,000 seat VDI solution on Hyper-V http://www.microsoft.com/Presspass/press/2011/may11/05-16TechEd11PR.mspx.

On a side note, I wish I would/could have gone to PubForum as well. Should have done that Smile. Now, these musings are based upon what I see at my current place of endeavor. VDI has a time and place where it can provide significant operational and usage advantages that make the business case for it. Today, I'm not convinced this is the case for our needs at this moment in time. Looking at our refresh schedule, we'll probably pass on a VDI solution for the coming one. But booting from VHD as a standard in the future … I'm going to look into that, it will be a step towards the future I think.

To conclude, BriForum 2011 was a good experience, and its smaller scale makes for plenty of good opportunities for interaction and discussion. A very positive note is that most vendors & companies present were discussing real issues we all face. So it was more than just sales demos. Brian, nice job.

A Brighter Future For Public Folders?


The Exchange Team posted a blog entry asking for feedback on how we use public folders. Nice to see they are taking an interest again. The past 4 years the mantra was "move away from them", "do it now while you still have the time", etc. SharePoint was always put forward as the number one replacement option. For some scenarios this is indeed a good choice, but let's face it, for some public folder uses there is no decent replacement, and it hurts us that they haven't seen any decent improvements in the last 2 Exchange releases. I know public folders have always been a bit problematic and finicky for us administrators. They tend to need a bit of voodoo and patience to troubleshoot and get running smoothly (see this blog post of mine for an example). But instead of using that as an excuse to get rid of them, they could also choose to invest in making them as reliable and robust as mail databases. Giving them the same high availability features might also be a welcome improvement, especially now with DAGs in Exchange 2010.

Especially in the Exchange 2007 era Microsoft was actively promoting getting rid of them. But they are still around because so many people use them and there is no decent alternative for all scenarios. In that respect they do listen to their customers. But we want improvements. Some of the functionality we need is there, but we really need more robust, reliable and highly available public folders. As a shared mail instrument for both sending and receiving mail in a team, public folders beat shared mailboxes and SharePoint any time. They also shine for maintaining a shared repository of contacts. I'm not a proponent of using public folders as a document repository, but I understand that their relatively simple usage and data protection via replicas still sound attractive to some versus the complexity of SharePoint. Sure, SharePoint has more to offer, but perhaps they don't need those capabilities, and to make matters even less attractive, it's quite an effort to migrate from public folders to SharePoint.

So that left us public folder users feeling a bit abandoned, with a message to get out but no easy path to go anywhere else that serves all our needs. So until today all my customers are still using public folders and want to keep doing so. They are worried, however, that one day they will be left out in the cold. But perhaps there is a better future on the horizon for public folders. Microsoft is asking us to "Help us learn more about how you use public folders today!" in that blog post. The emphasis is on "usage scenarios, folder management habits or thought process around public folder data organization". So if you need and use public folders in any way and you'd like them to get more attention and evolve into more robust and functional instruments, give Microsoft your feedback. Exchange 2010 has brought us great features & very affordable high availability together with support for virtualization. Now we either need a better alternative to public folders than the ones we have now or (my preference) we need better public folders. Since consumption of public folders occurs mostly in Outlook I would suggest the latter. And while we're asking, bring back access to folder shares in OWA Winking smile.

Exchange 2010 SP1 DAG & Unified Messaging Now Support Host Based High Availability & Live Migration!


Well, with the rather nice virtualization support for Lync, and the fact that Denali (SQL Server vNext) supports DAG-like functionality with Live Migration and host based clustering, it was about time for Exchange 2010 to catch up. And reading the white paper Best Practices for Virtualizing Exchange Server 2010 with Windows Server® 2008 R2 Hyper V™, that moment has finally arrived. I have to thank Michel de Rooij at eightwone.com for bringing this to our attention: http://eightwone.com/2011/05/14/exchange-2010-sp1-live-migration-supported/. So now we have the best features in virtualization at our disposal, and that simply rocks. We read:

“Exchange server virtual machines, including Exchange Mailbox virtual machines that are part of a Database Availability Group (DAG), can be combined with host-based failover clustering and migration technology as long as the virtual machines are configured such that they will not save and restore state on disk when moved or taken offline. All failover activity must result in a cold start when the virtual machine is activated on the target node. All planned migration must either result in shut down and a cold start or an online migration that utilizes a technology such as Hyper-V live migration.”

“Microsoft Exchange Server 2010 SP1 supports virtualization of the Unified Messaging role when it is installed on the 64-bit edition of Windows Server 2008 R2. Unified Messaging must be the only Exchange role in the virtual machine. Other Exchange roles (Client Access, Edge Transport, Hub Transport, Mailbox) are not supported on the same virtual machine as Unified Messaging. The virtualized machine configuration running Unified Messaging must have at least 4 CPU cores, and at least 16 GB of memory.”

And it is NOT ONLY for Hyper-V; look at the Exchange Team blog: "The updated support guidance applies to any hardware virtualization vendor participating in the Windows Server Virtualization Validation Program (SVVP)." Nice!

Anyone who's at TechEd USA 2011 in Atlanta should attend EXL306 for more details. Huge requirements, yes, but the same goes for physical servers. That's how they get the performance gains needed: lowering IO by using large amounts of RAM.

Think about the above statements. We now have support for host clustering with live migration. Combine that with technology like Melio (Sanbolic) on the software side or Live Volume (Compellent) on the storage side to protect against SAN failure (local or remote), add DAG high availability for the databases in Exchange 2010 (which can be multi site), and this becomes a very resilient package. So to come back to my other post on a brighter future for public folders: if they can sort out this red headed stepchild of the Exchange portfolio they have covered all their bases and have a great platform, with the option of making it better, easier and cheaper to implement, operate & use. No one will argue with that.

I know some people will say all this is overkill, too complex, too much or too expensive. I call it having options. When the S* hits the fan and you're "in the fight of your life", wading your way through one or multiple IT disasters to keep that mail flow up and running, it is good to have multiple options. Options mean you can get the job done using creativity and tools. If you have only one tool and one option, Murphy will catch up with you. Actually, "give me options" is one of my most frequent shout outs to the team when problems arise. But at what cost do these options come? That is up to the business and you to decide. We're getting very robust options in Exchange that can be leveraged with other technologies for high availability that have become more and more mainstream. This means none of this needs to be bought and implemented just for Exchange. It is already in place. Unless your IT "strategy" for the last 10 years was to run Windows 2000 & Exchange 2000 until the servers fall apart and there are no more spares available on eBay before you consider moving along.

Consumerization of IT Discussions at BriForum 2011 London


At BriForum 2011 in London I also attended a lot of talks on BYOD and the consumerization of IT. The connection with BriForum is where VDI and user virtualization fit in to facilitate this. Talk about this subject has been going on for about 5 years now, and it has been brought up at many TechEd sessions, for example.

If that concept works, I say bring it on. Really. I mean it holds so many promises of a better world for everyone involved that we'd be nuts not to try it. I like the concept, but will it work, is it possible? If so, where, when and to what extent? Anyway, it's all good stuff until it seems to require lawyers and contracts. Ouch! We're not too good at dealing with that, and I have to say that from my experience contracts are legal documents, very useful in that arena, but they won't stop people from doing what they can where and when they can. They don't think of using Hotmail or Dropbox as being "illegal" or against policy. They just use it. Look at any other corporate security and fair use policy. They are full of holes like a giant Swiss cheese. The ones demanding the policies are the ones doing most of the drilling.

But legalities aside, will it work on a very large scale in most places? Not right now, I think. The dependency of the business on the current infrastructure is so big it can't be replaced yet. So you need a transition, and that means adding stuff & new possibilities and facilitating them. So initially it will only add complexity for the service desk. All the talk of not being able to retain the best and brightest might be true, but the same goes for the IT personnel. You might retain a better MBA with your iPad & iPhone, but you could very well lose some support personnel who go "BOINK" trying to assist a workforce with hundreds of devices and apps. Are devices and toys to be considered benefits or true work instruments? Perhaps it attracts opportunistic gadget freaks instead of the best personnel. Do car policies help attract the best personnel in this day and age? I mean everyone offers them, so it's a level playing field. Perhaps not offering BYOD but providing really valuable environments works better. Flex work, telecommuting, better wages and interesting job content are still a lot better I think. The best people figure out fast that there is more and better to be had in remuneration than a device and your own app preferences.

Sure, I know an iPad might attract a college graduate, but they already have such high expectations (culture of entitlement) that perhaps this is not the best path to go down. Corporate life is not like what you see on TV. They might as well learn that early. It's not about a group of gorgeous young people acting important and professional whilst doing nothing, drinking rivers of macchiato from Starbucks and having affairs with the equally gorgeous colleagues. To complete the dream illusion they get paid generously for all that and at the end of the year receive a bonus to make a down payment on that city loft. Wake up! And to be fair, we're talking top drawer human resources here, and therein lies another issue: you'll need to offer it to everyone in the company because, when you hear the lawyers talking, it opens you up to legal action due to discrimination if you don't. Where is the differentiator then?

Now I'm not against the concept. On the contrary, I would love to see it work. But I'm afraid it's not such a good proposition as it is made out to be when done in a structured way and on a large (read company-wide) scale. Is it a perk or business value? I don't have an iPad or an iPhone, but I do use my own tools and some devices out of corporate control to get my job done, so basically I'm there, dudes. The main issue I still need to resolve is getting employers to pay for the expensive shiny toys I need to get my job done faster and better. The reason I don't have them is that I'm too cheap to buy them myself (so I don't see the value in them to get my job done better?). But when the boss pays, well hello iPad! But I'd better not overplay my hand. I think my boss would say good luck at your next job if I ever told him it takes an iPhone to retain me. But a CEO doesn't have that problem. He gets a "right away sir" for an answer.

Is this for everyone? I'm not so sure. In the long term, perhaps. Today, no. I have generation-Y and millennial "kids" in my social circles and guess who's asked to help them with all the tools, toys and gadgets? Right. They are indeed consumers! If you define digital natives as mere consumers then they fit the bill, but I would suggest that the designation "digital natives" implies they can deal with all the tech they use themselves at all times. In the end, when all self-service and tech support for their toys fail, who do you think the problem ends up with? Right. Ever dealt with a gadget junkie that is forced to go "cold turkey" in the blink of an eye? Face it, every helpdesk has to deal with recovering baby pictures, wedding movies, getting routers to work, helping with capturing a movie stream & configuring smartphones … consumers need support and that support has to be paid for. Who does it and who pays is a different matter. Aren't we just shifting it? What about contracts to make clear who does what, where and when? Have you ever worked at a service desk for internal IT? Really? Where is all the "enabling of the business" when you're waving a contract around as a user ends up at the service desk with a broken BYO device, or an application that was repaired but did not fix the issue, and now they need help to get to the data stored in that obscure application you've never seen? And when it's your manager, are you going to put the contract in his or her face? What about the secretary who can make your life hell or heaven depending on how by the book they play? Sounds familiar? Same old, same old. One thing is for sure: that cute, charming redhead who's very gadget minded and processes your requests for attending conferences doesn't have a problem now and never will. No, this is not sexist, it's reality, and you can always change the metaphor to reflect your own preferences, you're totally free to do so Winking smile. In essence, what I'm saying here is that with freedom comes responsibility and ownership.

Then there are the practicalities: who buys it and how does it get paid for? You need to have that figured out and organized. How do you deal with the legalities and auditing of licenses? Lawyer heaven Open-mouthed smile. Where are the tools to really manage devices and applications from all those different vendors well?

Just some brainstorming and playing devil's advocate here. Who wants this for work? Geeks. Who wants this as a perk? Employees. Who wants this as a business? People selling solutions to manage and facilitate this. What does the business want? The fact is, consumerization of IT is already a reality. It just happens. It will be interesting to see how we all deal with it, why those choices are made and what their effects are. Feel free to chime in via the comments.

Trip To London for BriForum


I'm on the Eurostar to London. Well, we're supposed to be heading there, but we're standing still in the middle of nowhere due to a train in front of us having some sort of (technical?) difficulties. The rails at this spot are not level, so we're hanging over a bit to the left. To make this little uncomfortable situation even a bit more uncomfortable, I'm in a coach with a bunch of Dutch high school kids on a trip. Let's just say that they are verbally strong and overactive Smile

My sidekick on this trip handed me an iPhone with Angry Birds, so I can have a go at blasting pigs to get some eggs back.

The Eurostar needs an overhaul I think. The carriages are a bit worn out and are starting to show their age. As to the promise of fast travel … well, at the moment I'm not very impressed with the high speed as we're standing still … no, we're moving as I type this, at cycling speed … and now we've stopped again. Right … it's like being on a local commuter train that stops in every village.

No need for the telco industry to worry about the mobile internet business. Internet on trains doesn't seem to be taking off.

So I'm at the hotel, 1.5 hours later than planned. I'm off to register and attend the welcome reception at BriForum next. Sunny day in London, that's always nice.