Friday 28 October 2011

Building a bootable Citrix Xendesktop USB with enterprise wireless support

Citrix Xendesktop gives us an unbelievable amount of flexibility in our environment. One of the great possibilities is "secure" bring-your-own-device scenarios.

I would love students and staff to be able to bring their own devices and securely connect to our network without configuration and IT support, which has been impossible in the past. We have considered scenarios of users installing clients and configuring settings but all of these have potential problems. What happens if an end user breaks their personal system while trying to connect to our network? Who is responsible for the support?

Enter the idea of a Linux bootable USB stick with the Citrix client preloaded: no user settings are ever changed and all configuration is taken care of. I have chosen Ubuntu 10.04 LTS as my OS of choice as it is flexible, easy to configure and known to boot well from USB.

For this to work in our environment, it needs to boot up, connect to the network and launch a Citrix login prompt automatically. As I don't want to open any "public" networks at this stage, I am using a secondary wireless network I already have in production, this network requires WPA2-Enterprise authentication. This will add some extra complexity to my set-up as I will need to load certificates into my image and configure WPA_Supplicant to automatically connect to my network using those certificates.

I am also going to load the latest Adobe Flash package and of course the Citrix client package.

Finally, I am going to merge the casper-rw persistent changes back into the live boot USB's squash file system so the USBs can be reused over and over again without being re-imaged.



Prerequisites
  • The Ubuntu 10.04 ISO
  • Any required CA, private keys and user certificates
  • An internet connection
  • An empty USB stick (primary)
  • A second USB stick for storing the created file (secondary)


Let's get into it!
1. Copy Ubuntu onto the primary USB stick. I won't include a tutorial here, but the Ubuntu site has great ones; I used their Windows-based Universal USB Installer tutorial. Ensure you create the USB stick with persistent storage of at least 512MB. This "persistent storage" allows us to make changes that are kept after a reboot.

2. Boot Ubuntu off the primary USB stick.

3. Start by setting the wallpaper you want. I have set a wallpaper that says "Please be patient as a connection is established..."

4. If you are connecting to an open wireless network you can simply configure it in Network Manager and let it take care of the rest, but WPA2-Enterprise networks are slightly different. If you are using WPA2-Enterprise authentication then continue, otherwise you can skip to step 5.

Depending on your luck, the current humidity and whether your shirt is purple or green, Network Manager may or may not work with a WPA2-Enterprise network and your certificates. For this reason I use the more robust WPA_Supplicant and get my hands dirty on the command line.

First we need to remove Network Manager.
sudo apt-get remove network-manager
Now let's configure WPA_Supplicant. First create a certificates folder under /etc/wpa_supplicant and give it the appropriate permissions.
sudo mkdir -p /etc/wpa_supplicant/certs
sudo chmod 700 /etc/wpa_supplicant/certs
Then copy your CA certificate, client certificate and client key in PEM format to the /etc/wpa_supplicant/certs directory. If your certificates have come from Active Directory in PFX format you will need to convert them to PEM. This can be a difficult step in the process, but the wpa_supplicant.conf man page has some great tips on this one. You can use the commands below to convert PFX certificates to PEM, but anything more is outside the scope of this tutorial.
Converting your client certificate and private key
openssl pkcs12 -in example.pfx -out user.pem -clcerts

Converting PFX CA certificate
openssl pkcs12 -in example.pfx -out ca.pem -cacerts -nokeys
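If you want to sanity-check the conversion commands without your real certificates, here is a self-contained round trip using a throwaway self-signed certificate (the file names and the "test" pass phrase are illustrative, and -nodes leaves the output key unencrypted, so only do this with disposable keys):

```shell
# Create a throwaway key + self-signed cert, bundle them as PFX,
# then convert back to PEM exactly as in the tutorial commands above.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=wpa-test"
openssl pkcs12 -export -in cert.pem -inkey key.pem \
    -out example.pfx -passout pass:test
openssl pkcs12 -in example.pfx -out user.pem -clcerts \
    -passin pass:test -nodes
grep -c "BEGIN CERTIFICATE" user.pem    # expect 1
```

Your real PFX from Active Directory will of course use its own export password in place of pass:test.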
Next create a config file under /etc/wpa_supplicant called config.conf and enter the following information
          network={
            ssid="networkname"
            key_mgmt=WPA-EAP
            scan_ssid=1
            eap=TLS
            pairwise=CCMP TKIP
            group=CCMP TKIP
            identity="username@domain"
            ca_cert="/etc/wpa_supplicant/certs/ca.pem"
            client_cert="/etc/wpa_supplicant/certs/client.pem"
            private_key="/etc/wpa_supplicant/certs/client-key.pem"
            private_key_passwd="test"
        }
As you can see from the above configuration, you need to customize your SSID, the names of the certificates, the identity and the private_key_passwd if there is one.

5. Install the Adobe Flash and Citrix packages. Both are available from their respective websites and both are very easy to install on Ubuntu (open them from within Firefox and they go straight to the package manager, which handles the rest).

6. Now open Firefox and set your home page as your Citrix address. I have my Citrix Access Gateway available in this wireless network so I set the address of my Access Gateway as the home page.

7. Now we are going to put a very simple bash script into our home folder called go, which reads as follows.
#!/bin/bash
sudo killall -9 wpa_supplicant
sudo /sbin/wpa_supplicant -c /etc/wpa_supplicant/config.conf -iwlan0 -B
sudo /sbin/wpa_supplicant -c /etc/wpa_supplicant/config.conf -iwlan1 -B
sudo /sbin/dhclient wlan0
sudo /sbin/dhclient wlan1
/usr/bin/firefox
Remember to make the script executable with chmod +x /home/ubuntu/go.

Notice I have listed both wlan0 and wlan1. This covers the fact that my target system might have totally different hardware (or multiple adapters); by specifying two interfaces we cover our bases, as at least one should work.
This script will establish a wireless connection, grab a DHCP IP address and then start Firefox, which should open to your Citrix homepage.
8. If you are connecting to an SSL site (which I hope you are, since this is on a "public" network) then we need to copy the Firefox trusted certificates into the Citrix store. If we don't perform this operation, Citrix won't trust the SSL certificate on your site and will fail to launch a desktop.
sudo cp /usr/share/ca-certificates/mozilla/* /usr/lib/ICAClient/keystore/cacerts/
If you are using a self-signed certificate, or a certificate generated by a private CA, then you need to import the CA certificate into the Citrix store.
9. Lastly we need to set our go script to launch on user login.

Go to the System menu > Preferences > Startup Applications, then click "Add", Name it "Citrix" and in the command field enter "/home/ubuntu/go" and then click "Add".
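For reference, the Startup Applications tool simply creates a .desktop file under ~/.config/autostart/, so writing one by hand achieves the same result (the citrix.desktop file name is my own arbitrary choice):

```ini
# ~/.config/autostart/citrix.desktop (file name is arbitrary)
[Desktop Entry]
Type=Application
Name=Citrix
Exec=/home/ubuntu/go
```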

10. Shut down Ubuntu and the changes will be written to the persistent file (casper-rw in the root of the USB stick).
At this point it might be worth rebooting into Ubuntu again to ensure it does indeed connect to the wireless network and launch Firefox with your Citrix login page before proceeding.



Additional customizations

We have done the bulk of the configuration, but we still need to make a few changes to the boot loader.

Let's edit text.cfg to cut down on the options presented to the end user. The only option I want presented is the ability to boot the Ubuntu Live CD; this should help cut down on any potential accidents.

To do this insert the USB stick into your Windows system, open the /syslinux/text.cfg file and make it read as follows.
    default live
    label live
      menu label ^Run Ubuntu from this USB
      kernel /casper/vmlinuz
      append noprompt cdrom-detect/try-usb=true persistent file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.lz splash --
We can take this a step further by making a change to /syslinux/syslinux.cfg that totally suppresses the boot menu, but you don't need to do this if you don't want to. If you want to remove the boot menu, just remove the following line from syslinux.cfg.
default vesamenu.c32
Save the changes and continue to the next section.



Merging the casper-rw changes back
1. Open the primary USB stick in Windows and rename casper-rw to casper1.

2. Boot Ubuntu from the USB stick again. You will notice when you get back into Ubuntu that the changes are all missing; don't worry, we have done that on purpose.

3. Install the squashfs-tools package (which provides mksquashfs); this will allow us to re-create the squash file system with our merged changes.
sudo apt-get install squashfs-tools
4. We need to make some temporary directories and mount the files we plan to merge. The following commands will create the temporary directories, mount the persistent changes file, mount the read-only operating system file and then overlay them both in the /tmp/tmp-squash directory.
cd /tmp
mkdir -p tmp-squash tmp-rw tmp-sqfs
sudo mount -o loop /cdrom/casper1 tmp-rw
sudo mount -o loop /cdrom/casper/filesystem.squashfs tmp-sqfs
sudo mount -t aufs -o br:tmp-rw:tmp-sqfs none tmp-squash
5. Insert your secondary USB drive. My USB is named "USB" so it mounted under /media/USB/
6. There is one last configuration change we need to make before we write the changes back: removing the "Install Ubuntu 10.04" icon from the desktop. We don't want users accidentally installing Ubuntu over their current operating system.
sudo rm -f /tmp/tmp-squash/home/ubuntu/Desktop/Install*

7. Now we need to "squash" the contents of these folders into a single file. This means that when we boot this USB in the future, the changes we previously made are always present and reset after every reboot.
sudo mksquashfs tmp-squash /media/USB/filesystem.squashfs
8. When the process is complete, shut down Ubuntu, move back to your Windows machine and insert both USB sticks.

9. On the primary USB drive you can remove the casper1 file, though it might be worth backing it up in case you want these changes in the future.

10. Copy the filesystem.squashfs file you created in step 7 from the root of the secondary USB to the /casper folder on the primary USB. You should be prompted to overwrite the file; click yes.

You're all done! You now have a read-only bootable Linux-based Citrix client that should work on a large number of devices. I have tried 5 devices on my network and they all work beautifully.

You can now image multiple Ubuntu USB flash drives and copy your custom filesystem.squashfs to make them instant Citrix access drives.

Wednesday 26 October 2011

Linksys WRT150N sd card modification with DD-WRT

 

The WRT range of wireless routers from Linksys has an amazing set of capabilities for the price. A simple flash to the DD-WRT firmware opens up endless enterprise-class features on devices priced under $100.

A couple of years ago I picked up a WRT150N in a fire sale for $49 and quickly set about maximizing this device by flashing to the latest DD-WRT firmware.

I was mainly interested in upgrading my home wireless to WPA2-Enterprise certificate based authentication with a Freeradius backend. After upgrading your WRT150N series router to DD-WRT you can install the Freeradius package and allow your little consumer router to not only handle the wireless, but also the enterprise authentication.

For anyone not running a full-time server this is perfect: everything is integrated into the one package, low cost, low power usage. The one key drawback is space; with a modest 16MB of usable space to store your programs and logs, it certainly is limited.




The SD card mod

Investigating possible solutions to my problem, I came across the SD card mod: a hardware modification of the WRT150N that adds an SD card for extra space.

The modification involves taking an existing SD card reader casing, and connecting it to the WRT150N main board with a series of wires. I could not find any correct wiring diagrams for the WRT150N on the internet, so my wiring diagram is available below.

I would suggest using a 512MB or 1GB SD card as I have had numerous issues getting larger cards to work consistently; at least while you get everything initially set up, a smaller card is a good idea.



Performing the modification 

Before we get started, make sure your device has the DD-WRT firmware of your choice. I have settled on the "DD-WRT v24 (05/24/08) std" firmware as it fits my requirements.

1. Crack open your 150N to expose the mainboard and components.

2. Start by cutting out a slot in the back of your 150N for the SD card to be inserted into. Alternatively you can simply mount the SD card inside your 150N but then of course you have to remove the cover if you need to access it.

3. Follow my wiring diagram below to connect the SD card reader to the appropriate GPIO points; the SD card reader has 9 gold connectors but we are only using 7. Please excuse my absolute lack of soldering skills.
You need to connect all the numbers from the SD card connector diagram above to the appropriate PCB locations below, for example 1 to 1, 2 to 2, etc. The only exception is a jumper wire that needs to run between points 2 and 5 on the SD card reader, as indicated above in purple. The "5" point will have both the jumper wire and another wire going off to the PCB connected to the single point.





4. Insert your SD card, turn on your WRT150N and log into the web GUI menu. Navigate to the Administration tab and find the "MMC/SD Card Support" menu. You need to configure your settings as mine are below.
  • MMC Device - Enabled
  • GPIO pins select - Manual
  • GPIO pins
    • DI: 5
    • DO: 4
    • CLK: 3
    • CS: 1
 
5. Apply your settings and then reboot your 150N.

After your device has rebooted, log back into the web GUI and navigate to the Administration tab. If all is well you will see the "Total / Free Size" value under the "MMC/SD Card Support" menu populated with the size of your SD card. As you can see above, my 1GB card shows a total size of 952.96 MB and a free size of 888.52 MB.



Troubleshooting

If your "Total / Free Size" value is not reading correctly, first try removing the SD card and manually formatting it with the ext2 file system. After the format is complete, re-insert the SD card and reboot the 150N.
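If you are doing the format from a Linux box, mkfs.ext2 is the command for that manual format. The sketch below practices on a loopback image file so nothing real is wiped; for the actual card you would substitute its block device, e.g. /dev/sdb1 (an assumption you should confirm with dmesg before formatting anything):

```shell
# Practice run against an 8MB image file instead of the real SD card.
dd if=/dev/zero of=sd.img bs=1M count=8
mkfs.ext2 -F sd.img   # -F lets mkfs operate on a plain file
```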

If you are still having issues, recheck that your wiring is perfect. As a last resort, I did find that some SD cards (especially cards larger than 1-2GB) may not work in the reader. If you have a larger card, try to get your hands on a small 512MB or 1GB card and try again.

Hopefully you don't do the same thing as I did and snap the top of your SD card reader mount off, nothing a big old chunk of hot glue couldn't fix though!

Windows 7 Media Center (MCE) command line recording application

During a previous home automation project using Debian Linux, X10 power management, Asterisk PABX software and Windows Media Center I found a distinct lack of flexibility with Windows 7 MCE command line recording. I wanted to be able to phone my Asterisk VOIP phone number and have it schedule a recording on my behalf.

While Windows 7 MCE is amazing at recording scheduled shows you set via the UI, unfortunately there is no good way to record shows from the command line.

After doing some research I found information on the Microsoft ClickToRecord API and decided to have a crack at building an application with my limited (OK, almost non-existent) C# knowledge.

After toiling away for a few hours I was able to make a working code example.

Binary Link
Source Link

Feel free to take my source and do what you want with it, PLEASE clean it up and send it back to me.


How it works

This is a two-step process: first create the recording XML file, then pass it to my schedule.exe.

1. Create an XML file as per below.

<?xml version="1.0" encoding="utf-8" ?>
<clickToRecord xmlns="urn:schemas-microsoft-com:ehome:clicktorecord">
    <body>
        <programRecord programDuration="30">
            <service>
                <key field="urn:schemas-microsoft-com:ehome:epg:service#mappedChannelNumber" match="exact">7</key>
            </service>
            <airing>
                <key field="urn:schemas-microsoft-com:ehome:epg:airing#starttime">2011-10-26T12:00:00+10:30</key>
            </airing>
        </programRecord>
    </body>
</clickToRecord>
You can see the XML file is very simple. I have set up a scheduled recording on channel 7 for 30 minutes at 12:00PM on October 26, 2011.

 The key properties are:

The program duration, set as 30 minutes in my example.
<programRecord programDuration="30">

The channel number, set as channel 7 in my example.
<key field="urn:schemas-microsoft-com:ehome:epg:service#mappedChannelNumber" match="exact">7</key>

The start time, set as 12:00PM on October 26, 2011 (in the UTC+10:30 time zone).
<key field="urn:schemas-microsoft-com:ehome:epg:airing#starttime">2011-10-26T12:00:00+10:30</key>


2. Simply execute the XML with the following command
schedule.exe filename.xml


My Usage

I use an Asterisk script to generate the XML file when I call. First it prompts me for the channel, then the program start time and date. Next the MCE is started with a wake-on-LAN boot if it is switched off, then the XML file is uploaded to the MCE and triggered with Sysinternals psexec on the MCE itself.
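The Asterisk dial plan itself is out of scope, but the XML-generation step can be sketched in a few lines of shell (the record.xml name and the hard-coded variable values are illustrative; in practice they would come from the IVR prompts):

```shell
#!/bin/sh
# Hypothetical generator for the ClickToRecord file; channel, start
# time and duration would be collected from the Asterisk IVR.
CHANNEL=7
START="2011-10-26T12:00:00+10:30"
DURATION=30
cat > record.xml <<EOF
<?xml version="1.0" encoding="utf-8" ?>
<clickToRecord xmlns="urn:schemas-microsoft-com:ehome:clicktorecord">
    <body>
        <programRecord programDuration="$DURATION">
            <service>
                <key field="urn:schemas-microsoft-com:ehome:epg:service#mappedChannelNumber" match="exact">$CHANNEL</key>
            </service>
            <airing>
                <key field="urn:schemas-microsoft-com:ehome:epg:airing#starttime">$START</key>
            </airing>
        </programRecord>
    </body>
</clickToRecord>
EOF
```

The resulting record.xml is then copied to the MCE and handed to schedule.exe as shown earlier.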

I hope you find this little utility as useful as I did!

Monday 24 October 2011

Windows 7 Media Center is very slow to load after a reboot

I have had a media center (centre for those Australians out there) since Microsoft first brought out XPMCE, the product has come a long way from one that needed 1000 plugins to function, to something that can stand on its own as a good MCE.

These days I am running Windows 7 MCE with four tuners and I depend on it to record all my TV for me; after 5 years of skipping through advertisements, it's hard to go back!

Every so often I find the Media Center becomes extremely slow: slow to load, slow to open channels or movies and slow to navigate any network paths.



The Solution

The fix (as long as you haven't contracted some nasty malware along the way) is to remove the MCE index. I have completed this process a few times and have observed a few drawbacks.
  • All your music and movies will need to be reindexed
  • You lose your channel favourites and ordering, but you don't have to rescan your tuners or reset scheduled records
  • This will also affect your Windows Media Player libraries


1. Open Windows Explorer and navigate to %LOCALAPPDATA%\Microsoft\
2. This will take you to a path something like C:\Users\<username>\AppData\Local\Microsoft\

3. The folder we need to remove is "Media Player", but before we do that we need to stop some services. Close any active Media Player or Media Center instances and then open a command prompt; you can do this by going to the Start menu, clicking Run, typing cmd.exe and pressing Enter.

4. Before you run the two below commands, please be aware that any current recordings will be stopped, so if your favourite show is recording you may want to delay this until later.

In the command prompt type:

net stop ehRecvr
net stop WMPNetworkSVC

 
 

5. Now we can rename the "Media Player" folder. I prefer to rename instead of delete, just in case I realize I need something from the directory later. Rename the "Media Player" directory to "Media Player.bak".

6. Now restart your computer. Alternatively you can restart those two services, but because the set of running services can differ from machine to machine, a reboot is the more conclusive option.

If your Media Center is still slow after doing this it might be worth looking at other programs you have running in the background. I have completed this process 2-3 times now and it perks up my MCE performance every time!

Monday 17 October 2011

Windows 7 SP1 optimizations for Citrix Xendesktop 5 and 5.5 with PVS

Virtual desktops are all about return on investment: lowering the total cost of ownership while reducing support load and delivering a service as good as, if not better than, what is available on a traditional thick client. A key component of reducing cost is density: how many virtual machines can you fit on a single server, and at what cost?

Recently I started working on an optimization package that I could apply to most of my vDisks to lower RAM usage, lower disk IOPS and ensure a smooth end user experience. The below list of optimizations has been cobbled together from other people's recommendations, best practices and some optimizations I thought might be useful.

These optimizations include dropping unneeded events from event logs, capping event log size, optimizing the TCP stack for PVS, disabling unrequired services, disabling visual effects and optimizing the file system. Many of the same optimizations I use when building an efficient overclocking operating system work great with virtual desktops, so this was a fun project for me.

I am using these optimizations on my production machines and they are all working well, but you should still thoroughly test them in a development environment before going live in case they conflict with your current configurations.

Simply copy the below into a .cmd batch file, boot your vDisk in private mode and execute the file.



The Optimizations

REM ----------------SOF HERE----------------

@echo off

REM Xendesktop 5 optimizations by James Trevaskis
REM Published on 17/10/2011 - Use at your own risk

REM TCP optimizations
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\parameters" /v DisableDHCPMediaSense /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\parameters" /v KeepAliveTime /t REG_DWORD /d 60000 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\parameters" /v KeepAliveInterval /t REG_DWORD /d 100 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\parameters" /v TcpMaxDataRetransmissions /t REG_DWORD /d 10 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\parameters" /v DisableTaskOffload /t REG_DWORD /d 1 /f

REM server and workstation service optimizations
reg add "HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters" /v MaxWorkItems /t REG_DWORD /d 512 /f
reg add "HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters" /v MaxMpxCt /t REG_DWORD /d 2048 /f
reg add "HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters" /v MaxFreeConnections /t REG_DWORD /d 100 /f
reg add "HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters" /v MinFreeConnections /t REG_DWORD /d 32 /f
reg add "HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v MaxCmds /t REG_DWORD /d 2048 /f
reg add "HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v UtilizeNTCaching /t REG_DWORD /d 0 /f
reg add "HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v MaxThreads /t REG_DWORD /d 17 /f

REM memory management
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v IoPageLockLimit /t REG_DWORD /d "65536" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v DisablePagingExecutive /t REG_DWORD /d 1 /f
reg add "HKLM\System\CurrentControlSet\Control\Session Manager" /v RegistryLazyFlushInterval /t REG_DWORD /d "30" /f

REM netlogon wait
reg add "HKLM\SOFTWARE\microsoft\Windows NT\CurrentVersion\Winlogon" /v WaitForNetwork /t REG_DWORD /d 1 /f

REM dont display last logon name
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System" /v DontDisplayLastUserName /t REG_DWORD /d 1 /f

REM disable services
reg add "HKLM\SYSTEM\CurrentControlSet\Services\wuauserv" /v start /t REG_DWORD /d 4 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\CiSvc" /v start /t REG_DWORD /d 4 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\BNNS" /v start /t REG_DWORD /d 4 /f

REM increase services timeout
reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d "120000" /f

REM priority control optimize foreground tasks
reg add "HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl" /v Win32PrioritySeparation /t REG_DWORD /d 38 /f

REM disable dr watson
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug" /v Debugger /t REG_SZ /d "" /f

REM visual effects (you may or may not want to apply this)
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\VisualEffects" /v VisualFXSetting /t REG_DWORD /d "2" /f

REM print optimizations
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Print\Providers" /v EventLog /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Spooler" /v ErrorControl /t REG_DWORD /d 2 /f

REM event log
reg add "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Application" /v MaxSize /t REG_DWORD /d "2097152" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Application" /v Retention /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\System" /v MaxSize /t REG_DWORD /d "2097152" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\System" /v Retention /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Security" /v MaxSize /t REG_DWORD /d "2097152" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Security" /v Retention /t REG_DWORD /d 0 /f

REM disable terminal services client printer mapping
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v fDisableCpm /t REG_DWORD /d 1 /f

REM file system
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1 /f

REM misc
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Windows" /v ErrorMode /t REG_DWORD /d 2 /f

echo Xendesktop optimizations complete



REM ----------------EOF----------------

Wednesday 12 October 2011

How to replace Apple Macbook Air 2010 3.1 SSD with MX-Katana and performance analysis

I love my Macbook Air 2010 3.1 11" for its portability and size, but the stock Toshiba SSD is lackluster in performance, plus I only have the 64GB version. After checking out my available upgrade options I got my hands on the Mach Xtreme MX-KATANA 128GB.

For those not familiar with Mach Xtreme, they are a manufacturer focused on the enthusiast and high-end sector. I reviewed their MX Armor 2000 9-9-9-24 2x4GB memory kit in a shoot-out for APC earlier in the year and was suitably impressed.



The Product

Name: MX-KATANA
Model: MXSSD2MKAT-128G
Maximum read performance: 275 MB/s
Maximum write performance: 225 MB/s
Sustained Write: 130 MB/s
IOPS: 25,000

Very impressive specifications for a notebook drive; let's hope my heightened expectations hold true in the performance analysis.

 

  

I was very happy to find the necessary screwdrivers included in the package, as Apple uses "5-Point Pentalobe" screws and the matching driver isn't exactly one most people have in their collection.



The Installation

Before you begin, I suggest making a copy of your existing SSD with a tool like Carbon Copy Cloner (CCC). If you use CCC to clone your SSD onto an external USB hard drive, then after you have installed your new SSD you can boot from the USB clone and use CCC to clone the data onto your new MX-KATANA SSD.

I have created the YouTube video below, which details the complete process of removing the back cover, installing the new SSD and replacing the cover.

Here are some more photos of the installation process.







Benchmarks

I have decided to use XBench and QuickBench to benchmark both the existing Toshiba SSD and the new MX-KATANA; this allows me to directly compare the two.

I set QuickBench to 5 loops and for Xbench I reran and averaged the results over 3 complete runs.




Performance Analysis


Starting with the QuickBench (QB) average for sequential read, we see the KATANA take a small lead of around 10 MB/s, nothing spectacular but solid. Moving onto QB random write average and again we see a small margin of only 7 MB/s.

I also listed the 1024K random write, this is not an anomaly but a consistent result I see across all of my benchmarks. As the write size is increased the performance leans more in the direction of the KATANA.

Moving onto the XBench results we see a meagre 1% advantage to the KATANA in the disk test score results. In the un-cached write 4K results the stock Toshiba SSD actually takes a lead of around 9 MB/s, but the victory is short lived when the KATANA smashes it in large 256K read operations by 20%.

These results certainly give a clear indication that the KATANA is a great option for those using large cluster sizes, but who is? I don't know many people using anything bigger than around 8-16K, unless it is a dedicated drive for video/audio editing.

That being said if you are working with big files the KATANA will have big benefits for you, but for everyday use you can expect around a 10% performance gain over the stock SSD.

Friday 7 October 2011

Lync 2010 reports "Lync cannot connect to the Exchange server"

While this hasn't been causing us any problems (yet), I am planning to use Lync 2010 with Exchange 2010/Outlook calendar integration shortly and until all Exchange connectivity issues are resolved, this can't be achieved.



The Problem

After the Lync 2010 client has been open for around half an hour or so, a red error box appears in the bottom right-hand corner of the client; clicking on the error displays a message.
Exchange Connection Error - Click to display details.

and then after clicking that error we see a further error message.
Lync cannot connect to the Exchange server. Lync will attempt to retry the connection.


Investigating the problem

To get some more information you can hold CTRL and right-click on the Lync 2010 icon in the notification area, then select the "Configuration Information" window. This window shows how Lync is set up and will potentially display any problems.

On my configuration I noticed the "EWS Internal URL" and "EWS External URL" were both blank, and the "EWS Information" had a status of "EWS not deployed". I know it certainly is deployed, but just to be sure I verified it was configured and that there was an instance of EWS under my Exchange 2010 IIS instance.


What I also noticed after browsing through my registry was that the "Autodiscovery" key was totally missing from my Lync configuration. The following key should exist; replace <SIP> with your SIP account name, which for me was jtrevaskis@domain.edu.au

HKCU\Software\Microsoft\Communicator\<SIP>\Autodiscovery

If the Autodiscovery registry key does not exist, then your Lync client has not been able to connect to the Exchange Autodiscover service, and as a result it cannot enumerate the locations of critical Exchange services, such as EWS, that seed the Lync client information.



But I already have the Autodiscover service configured perfectly

Well, that is what I thought too! What I didn't realize is that when I originally configured Lync, I set my client domain as my external domain. It is important to identify what your SIP domain is (the address you use when you log into Lync); for me it was my external email domain, and this was the root cause of my problems.

Internal Domain: domain.internal
External Domain: domain.edu.au

So while I have my DNS records for autodiscover.domain.internal (A) and _autodiscover._tcp.domain.internal (SRV) configured perfectly, the Lync client is actually looking for autodiscover services at autodiscover.domain.edu.au and _autodiscover._tcp.domain.edu.au

I ran a quick Wireshark session to confirm my suspicions and within a few minutes of starting the Lync client I was seeing DNS lookup requests for _autodiscover._tcp.domain.edu.au



The Solution

I don't really want to offer Autodiscover services to the outside world and I don't want to publish _tcp records onto my external DNS services. After all, my Lync clients are being used internally only, so I wanted to find a solution that would work internally without disclosing too much information.

I got it! OK, this is a little bit of a hack, but it works perfectly for me and ensures none of my internal DNS records need to be published on the internet.

1. I created a _tcp.domain.edu.au zone on my internal DNS servers.

2. In my newly created zone, I added a SRV record by right clicking the zone and selecting "Other Records" and then "SRV".

3. I entered _autodiscover as the service, _tcp as the protocol, 443 as the port number, and the FQDN of the Exchange IIS instance that offers my Autodiscover and EWS services.

That is it, simple as that! I restarted my Lync client and within 5 minutes the "Configuration Information" window was displaying the correct EWS URLs and the HKCU\Software\Microsoft\Communicator\<SIP>\Autodiscovery registry key had been created and fully populated.

If you want to use Lync externally or already have external _tcp DNS records, then you can easily point your clients to an external EWS server and achieve the same result.

The main requirement is that an _autodiscover._tcp SRV record points directly to the Exchange IIS instance that is hosting EWS. As long as you achieve that, be it via a hack like mine or by publishing external _autodiscover._tcp records pointing to an external EWS instance, it should work.



Other issues worth investigating

If you are still having issues, it is worth checking your Exchange 2010 IIS instance to ensure Autodiscover and EWS are configured, and that you can reach the Autodiscover service via https://domain/Autodiscover/Autodiscover.xml, where you should see some raw XML returned.

Also ensure that when you visit your Exchange IIS instance over https:// in a web browser, via the same FQDN you entered in your DNS SRV record above, you receive no certificate errors. If there are any certificate errors you will need to resolve them before Lync will be able to reach the Autodiscover service.
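The browser checks above can be approximated in a few lines of Python. Certificate verification is deliberately left on so a bad certificate fails loudly, mirroring the "no certificate errors" requirement; the hostname below is just an example.

```python
# Sketch: probe the Autodiscover endpoint with certificate verification on.
import ssl
import urllib.request


def autodiscover_url(host):
    """Build the Autodiscover XML URL for a given Exchange FQDN."""
    return "https://%s/Autodiscover/Autodiscover.xml" % host


def probe(host):
    """Return the HTTP status; raises ssl.SSLError on certificate problems."""
    ctx = ssl.create_default_context()   # verifies the chain and hostname
    req = urllib.request.Request(autodiscover_url(host))
    with urllib.request.urlopen(req, context=ctx, timeout=10) as resp:
        return resp.status
```

Note that an unauthenticated request will typically come back as an HTTP 401 (which `urlopen` raises as `HTTPError`); that still proves the TLS certificate is fine, whereas an `ssl.SSLError` means Lync will have the same certificate problem.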

Thursday 6 October 2011

Upgrading from Citrix PVS 5.6 SP1 to PVS 6.0

Citrix recently released their latest version of Provisioning Services (PVS), version 6.0, with a number of new features.

The most substantial of these is vDisk Update Management, allowing the administrator to control patch management with SCCM and Windows Updates automatically. This involves dedicating a virtual desktop host to receiving updates and changes; PVS can then move the vDisk automatically into a development or production environment according to your requirements.

PVS now also supports thin provisioning of vDisk changes, so instead of copying that 25GB vDisk and then making changes to it, a VHD differencing disk is created, which can then be slipstreamed back into the original when everything is just right. This makes sense as it decreases the space and time required to make new versions of vDisks.

The last major update is a DR feature for vDisk distribution, allowing you to easily distribute a vDisk across multiple PVS servers or sites.



The Upgrade Process
1. Stop the PVS streaming service from the Windows Services MMC snap-in.

2. Make a backup of the PVS SQL database. Don't skip this step!

3. If you are using PVS in conjunction with XenDesktop (XD) and you don't have any high availability, put your XD pools into maintenance mode.

4. Uninstall your current PVS version and then reboot.

5. After the reboot, simply insert your PVS DVD or ISO and run PVS_Server_x64.exe (or PVS_Server.exe if you are on a 32-bit OS) from the "Server" directory.

6. Follow the prompts until the installation is completed, then follow the PVS configuration wizard. If all goes well you will not need to configure anything at all, you can accept all the defaults.

7. Install the PVS console by running PVS_Console_x64.exe (or PVS_Console.exe) from the "Console" directory on the DVD.
The installation should now be complete, but if you are using XenDesktop you may need to do some troubleshooting to resolve problems that may have occurred during the uninstallation and upgrade.
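Steps 1 and 2 of the upgrade can be sketched as commands run from an elevated prompt. The service name "StreamService", the SQL instance and the database name "ProvisioningServices" are assumptions here; check what your PVS installation actually uses before running anything.

```python
# Sketch of steps 1 and 2: stop the streaming service, back up the database.
# Service name, SQL instance and database name are ASSUMPTIONS -- verify yours.
stop_cmd = ["net", "stop", "StreamService"]   # assumed PVS stream service name

backup_cmd = [
    "sqlcmd", "-S", r".\SQLEXPRESS",          # assumed SQL instance
    "-Q", "BACKUP DATABASE [ProvisioningServices] "
          "TO DISK = N'C:\\Backup\\pvs-pre-6.0.bak'",
]

# These lists are in subprocess.run() form, e.g. subprocess.run(stop_cmd).
```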



Problems and Resolutions

After I upgraded my PVS to 6.0 I was able to launch the console without a problem, but I was unable to use the 'XenDesktop Setup Wizard', and when I tried to open the Desktop Studio console I received the following error message:
The Windows PowerShell snap-in 'Citrix.Broker.Admin.V1' is not installed on this machine.

It seems that when I uninstalled PVS 5.6 SP1, it may have inadvertently removed or damaged some Desktop Studio PowerShell snap-ins.

The resolution is very simple though: pop in your XenDesktop 5.0 or 5.5 DVD or ISO and reinstall the following two files.
\x64\DesktopStudio\PVS PowerShell SDK x64.msi
\x64\Citrix Desktop Delivery Controller\Broker_PowerShellSnapIn_x64.msi


When launching the above MSI files, first click uninstall, then re-run them and install them both. After completing this process your Desktop Studio and PVS console should work perfectly.
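The uninstall/reinstall cycle above can also be done unattended with msiexec. A minimal sketch, assuming the XenDesktop media is mounted on D: (the drive letter and quiet flags are my assumptions, the MSI paths are the two listed above):

```python
# Sketch: uninstall then reinstall the two snap-in MSIs with msiexec.
# Drive letter D: for the XenDesktop media is an ASSUMPTION.
msis = [
    r"D:\x64\DesktopStudio\PVS PowerShell SDK x64.msi",
    r"D:\x64\Citrix Desktop Delivery Controller\Broker_PowerShellSnapIn_x64.msi",
]


def repair_commands(paths):
    """Yield (uninstall, reinstall) msiexec command lines for each MSI."""
    for p in paths:
        yield (["msiexec", "/x", p, "/qn"],   # quiet uninstall
               ["msiexec", "/i", p, "/qn"])   # quiet reinstall
```

Each pair could then be run in order with `subprocess.run()` from an elevated prompt.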

3000 MHz with 8GB of RAM (2x4GB) just for fun

After the recent GeIL competition, where I took 16GB of RAM to 2720 MHz to win, I decided to have a play with the same GSKILL Trident 2000 MHz 8-9-8-24 kit in an 8GB dual channel configuration.

The result after a couple of hours of tweaking was a cool 3000 MHz!

Setup
2x GSKILL Trident 2000 8-9-8-24 4GB
Gigabyte X58A-UD7 rev 2.0 with GOOC2010 bios
Intel 980X Gulftown CPU
Single Stage @ -50
Western Digital VelociRaptor 600GB SATA3
Windows 7 32bit SP1



Results

CPU-Z validation link





I am starting to play with my Corsair GTX6 sticks on the same set-up now with hopes of breaking 3400 MHz.

Wednesday 5 October 2011

Attempting to make Virtual Desktops with old hardware attractive to the end user

As I have mentioned on a number of previous blog posts, I am not happy with any of the cheaper thin endpoints for delivering video across the network, they simply don't work well. Of course you can spend $600 and get a thin client powerful enough to deliver video but after adding a keyboard, mouse, monitor and licensing, you might as well buy a traditional thick client.

We have migrated a slice of our environment to XenDesktop using old hardware as clients and it has been working exceptionally well from a technical perspective, but its downfall is that people are apprehensive about using "old looking" endpoints. Some of this can be addressed with education ("Yes, I understand it looks like the laptop you were using 5 years ago, but actually.."), but there is still that aesthetic aspect.


While aesthetics is not important in some areas, it does become more of an issue in rooms where we might have external companies or individuals visiting. One of these areas is a presentation room in our Library, which often accommodates external visitors and staff meetings. We have decided to install thin clients in this room and progressively move them out to the rest of the Library, but as this room is used by 3rd parties, aesthetics is a consideration.

The Library computers are used almost exclusively for the Internet and Microsoft Office, with a small percentage of other applications, making them PERFECT candidates for thin clients.



The Solution

We are using old laptops for this room, so we quickly decided we needed to purchase a new monitor, mouse and keyboard. This added another layer of complexity, as the laptop lid needs to be opened to access the on/off switch (the switch is inside the laptop lid). Our solution was to have stands made with a laminate finish that matches the existing benches in the room. The units hide the laptop and most of the cables while acting as a stand for the monitor to sit on.

You can see in the above photo that we have had an opening made at the back for the cables to be routed through, and another opening for the laptop screen to "pop up" through.

A headphone jack and USB port have been built into the front to provide easy access for the end user. When you sit in front of this system you totally forget you are at a thin client.

 Above is an image of the fully configured system running.

I wouldn't change very much if I were to rebuild them. Perhaps increase the rear recess by about 2 cm to allow easier access to the on/off button.



Total Cost

Custom built stand - $40
Cables - $10
Monitor - $155
Keyboard/Mouse - $20
Server slice + Licensing - $320

Total = $545

Traditionally a desktop build for business or education costs about $1000 per unit, so in comparison a build like this that reuses old systems can significantly decrease total cost of ownership, not to mention the savings from decreased management.

Our plan is to cycle old laptops through these areas, so every couple of years as more laptops are decommissioned they can be installed as thin clients.

If you have any similar solutions of your own I would love to see them!

Saturday 1 October 2011

James Trevaskis wins HWBOT 16GB memory clocking challenge with 2720 MHz

Overclocking competition overlords HWBOT recently held a memory overclocking competition in conjunction with manufacturer GeIL. The competition consisted of 3 rounds: Super PI, lowest memory clock, and highest memory clock with 4x 4GB modules.

I decided to enter the 16GB RAM challenge. I have played around with clocking 16GB of memory in the past, but never seriously pushed the frequency. I chose the GSKILL Trident 2000 MHz 8-9-8-24 kit as I had tested it vigorously in 2x4GB (8GB) configurations and easily achieved 2500 MHz.

Starting with the P55 platform and an Intel Lynnfield 870 CPU, I was only able to achieve 2480 MHz with timings of 9-11-9-27. At this point I thought I was done, as P55 is known as the best memory clocking platform.

I quickly moved to X58 and this is where the real magic happened. I relaxed the memory timings to 11-15-15-31, set TRFC at 200 and loosened the B2B latency. I started up my single stage cooling, an evaporator based cooling solution (like a fridge for your CPU), which set my CPU at -50 degrees Celsius. Instantly I hit 2600 MHz. Wow, this is great! After playing with some more of the sub timings I was at 2720 MHz before I knew it.

What I had achieved in 6 hours on my P55 setup I was passing within an hour on my X58 system. I put this down to the power of the IMC on Gulftown. Sure, P55 has traditionally been great at memory clocking, but that is because the uncore isn't a limitation there. I believe X58 is probably the stronger platform for clocking high density memory modules, as it is easier to drive 3 modules in triple channel with the 4th in its own channel than 2 modules over 2 channels as per P55.


My Setup:
4x GSKILL Trident 2000 8-9-8-24 4GB
Gigabyte X58A-UD7 rev 2.0 with GOOC2010 bios
Intel 990X Gulftown CPU
Single Stage @ -50
Western Digital VelociRaptor 600GB SATA3
Windows 7 32bit SP1


Results:

CPU-Z Validation




I made a quick YouTube video of the setup.