Thursday 20 December 2012

AMD ULPS (Ultra Low Power State) disable tool

Anyone who has troubleshot AMD/ATI CrossFire configurations will probably know about Ultra Low Power State (ULPS). ULPS is a sleep state that lowers the frequencies and voltages of non-primary cards in an attempt to save power. The downside of ULPS is that it can cause performance loss and some CrossFire instability.

ULPS is controlled via a registry value named "EnableULPS" located under the "HKLM\SYSTEM\CurrentControlSet\Control\Video" key. It can be disabled by setting the value to 0 or enabled by setting it to 1. Because there are often multiple EnableULPS values in the registry, covering different cards and even older graphics cards that were previously in the system, making the changes manually can be a tedious task.

This is a simple tool written in the AutoIt scripting language. It searches the HKLM\SYSTEM key and changes every EnableULPS value to 0 or 1, as chosen by the user.
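The core of the tool is just a recursive search-and-set. Below is a minimal Python sketch of that logic, using a nested dict to stand in for the registry tree (the real tool is AutoIt and walks HKLM\SYSTEM through the Windows registry API; the GUID key names here are made up for illustration):

```python
# A nested dict standing in for a registry subtree: sub-dicts are keys,
# everything else is a value. Key names below are hypothetical.
def set_ulps(tree, enable):
    """Recursively find every EnableULPS value and set it to 1 (on) or 0 (off).

    Returns the number of values changed."""
    changed = 0
    for name, node in tree.items():
        if isinstance(node, dict):
            changed += set_ulps(node, enable)
        elif name == "EnableULPS":
            tree[name] = 1 if enable else 0
            changed += 1
    return changed

registry = {
    "Video": {
        "{GUID-1}": {"0000": {"EnableULPS": 1}},            # current card
        "{GUID-2}": {"0000": {"EnableULPS": 1, "Foo": 5}},  # old card left behind
    }
}
print(set_ulps(registry, enable=False))  # both values set to 0
```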


You can download the binary here (or the source if you are interested) from my GitHub.

Sunday 18 November 2012

GTLcontrol code update v2.0 online

You might have seen the control software I put up for Gigabyte Tweak Launcher (GTL) about a month back. At the competitive end of overclocking, the ability to dynamically control clock speed is priceless; it can be the difference in those last 10 points you need to take a world record. GTLcontrol provides this functionality to any motherboard that supports Gigabyte's GTL.

I have updated the code base to include a full GUI, hotkeys for bclk/multi up/down adjustments and I've also cleaned up the code to increase the performance a little.


The hotkeys F7/F8 are used for multi up/down and F10/F11 are now used for bclk up/down. This allows low end Gigabyte boards to be used just like the uber high end Z77X-UP7.

To support the GUI functionality there is also a new F5 hotkey to show the GUI if you have hidden it. I suggest turning the GUI off once you have your settings right, as GTLcontrol will then use less memory and impact your benchmark less. I have done extensive testing with it off and on and found no efficiency loss, but better safe than sorry. Please note that if you do turn the GUI off, you need to manually turn it back on by setting "gui=1" in the gtlcontrol.ini file.

Remember to save after you make any changes and then hit reload; this activates the changes.

Feel free to take this code and do whatever you want with it; please email me any changes, as I would like to check them out.

You can download the full version from here or the source code from my GitHub.

Enjoy!

Wednesday 17 October 2012

How to upgrade SCCM 2007 clients to SCCM 2012

You would think upgrading from SCCM 2007 to 2012 would be a relatively easy task, but I've found it a mammoth process. Not only is there no way to "upgrade" the server itself, there is also no clean way to upgrade the clients directly from 2007 to 2012. I am not a big fan of the SCCM client "push" technique, so below is a process that is working for me.

I have put together the following kixtart/batch script combination to uninstall 2007 and then install 2012 in my environment. This might not work for everyone, but I use a combination of computer startup/login scripts and SCCM to carry out maintenance tasks in my environment.



The pre-launch script

Firstly I am calling the main batch script from my kixtart computer startup script. This could all be done in kixtart, but I want it to happen in the background. If I used commands like "start /wait" (which I use to ensure the SCCM 2007 client is installed before the 2012 install begins) within the kixtart script itself, the computer startup process would take way too long.
;spawn SCCM client upgrade into background
IF (EXIST ("c:\windows\system32\ccm\") AND EXIST ("c:\windows\ccm\AAProv.dll")=0)
  SHELL "%comspec% /c start \\contoso.local\software\sccmclient2012\sccmupgrade.cmd"
endif
The above script simply checks that the SCCM 2007 folder "c:\windows\system32\ccm\" exists on the system and that the SCCM 2012 file "c:\windows\ccm\AAProv.dll" is not present, then launches the main upgrade script. You could use any file for the check; I just picked AAProv.dll as it was one of the first files I saw in the directory.



The upgrade batch script

1. Create a new folder and copy in the SCCM 2012 client and ccmclean.exe. CCMCLEAN has been floating around since SMS 2003 and, while not officially supported, still works well to eradicate unwanted SCCM 2007 client installs.

2. Create an empty batch script and enter the following. Obviously you need to replace the contoso.local lines with the relevant paths and MP FQDN for your environment.
REM ensure 2007 is installed properly (can prevent uninstall)
start /wait %windir%\system32\ccmsetup\ccmsetup.exe /logon
REM uninstall sccm 2007
start /wait %windir%\system32\ccmsetup\ccmsetup.exe /uninstall


REM ccm clean
start /wait \\contoso.local\software\sccmclient2012\ccmclean.exe /all /q

REM 2012 client install
if not exist "c:\windows\ccm\AAprov.dll" (
  if not exist "c:\windows\system32\ccm\core\bin\clicore.exe" (
    \\contoso.local\software\sccmclient2012\ccmsetup.exe /service SMSSITECODE=SDC SMSCACHESIZE=6144 SMSMP=mpfqdn.contoso.local /UsePKICert
  )
)
REM clean exit
exit 0
The above script works as follows. First I run an INSTALL (yes, an install, not an uninstall) of the 2007 client with the /logon flag. This ensures the 2007 client is installed properly, as an improper install can cause the uninstall to fail.

Next I attempt a clean uninstall with the native SCCM /uninstall command. This often fails, so it is followed up with the CCMCLEAN /all command, which removes anything the native uninstaller misses.

After the uninstall of the 2007 client, two checks are run to ensure the 2007 client is uninstalled and the 2012 client isn't already installed. Then the 2012 client is installed. I use "start /wait" commands to ensure the uninstall steps occur before the 2012 install starts.

I have used this script on a mixed environment of Windows XP SP3 32-bit and Windows 7 SP1 32-bit with no problems; 99.9% of clients uninstalled and upgraded cleanly.

This entire process probably won't suit your environment, but hopefully you can take some ideas and apply them to your deployment techniques.

Tuesday 2 October 2012

Gigabyte Tweak Launcher hotkey control tool

Overclockers everywhere rejoiced when Gigabyte released their Gigabyte Tweak Launcher (GTL) software, capable of adjusting the multiplier, voltages and bus speeds. GTL is super lightweight and perfect for competitive overclocking, but unfortunately it has one small downfall: it doesn't support hotkeys. Hotkeys can be an essential part of taking that world record, clocking a higher frequency for one part of a test, then lowering the frequency for a heavily CPU-bound test.

With the help of the lightweight AutoIt scripting language I have put together a small script capable of passing commands to GTL via hotkeys. My gtlcontrol script works as follows.



How to use GTLCONTROL

1. Download gtlcontrol.exe and gtlcontrol.ini from my github repositories.

2. Make sure Gigabyte Tweak Launcher is installed.

3. Edit gtlcontrol.ini to set your bclk, voltage and multiplier.

4. There are 3 options, one for each hotkey: "multienable", "voltenable" and "bclkenable".

If you want to enable any of the 3 options set the value to 1, setting the value to 0 disables the respective option.

For example, multienable=1 enables the multiplier. multienable=0 disables the multiplier.

NOTE: Every time you edit gtlcontrol.ini you need to exit and reload gtlcontrol.exe.
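Putting steps 3 and 4 together, the relevant part of gtlcontrol.ini might look something like this (the key names come from the steps above; the exact layout and any section headers may differ in the shipped file, so treat this as a sketch and go by the ini included with the download):

```ini
; example only - enable multiplier and bclk hotkeys, leave voltage alone
multienable=1
voltenable=0
bclkenable=1
```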


5. To use the application, first start Gigabyte Tweak Launcher and then open gtlcontrol.exe. GTL must be running in the background for the hotkey functionality to work.

6. Your hotkeys F1 through F4 will execute the options you set in gtlcontrol.ini.


NOTES:

The options in the [debug] section are for debugging; you don't need to touch them.

You may want to disable the teamau splash screen; to do this, set title=0.

If you don't trust binaries, the source is provided in gtlcontrol.au3 (check out my GitHub); it is written in the AutoIt scripting language.

Friday 7 September 2012

How to silently install Microsoft Mathematics 4.0

The installation options of Microsoft Mathematics 4.0 are very unusual. It's seemingly designed for use in schools, yet it has no ability to be silently installed. The only option for network admins is a Citrix XenDesktop or Microsoft App-V style deployment.

Through trial, error and research I was able to find a set of silent installation commands that work.



The Process

1. Download Microsoft Math 4.0 x86 from here >> http://www.microsoft.com/en-au/download/details.aspx?id=15702

2. Extract the contents of MSetup_x86.exe to a folder. This can be easily done with WinRAR.

3. Create an install.cmd with Notepad++ or your favourite text editor.

4. Into your install.cmd add the below deployment commands.
start /wait msiexec /i MSMath_x86.msi FROMSETUP=1 ALREADYRUNNING=0 DOTNET35=1 SKIPDXINSTALL=0 SXSOFF=0 D3DOFF=0 /qn

regedit /s eula.reg

exit 0
These commands are a combination of a silent install, tricking the MSI into thinking it's running from setup.exe, and a selection of other options. I also apply an eula.reg (as seen below) that accepts the EULA and prevents it from popping up for the end user.

You may be able to further optimize the silent install string, but the above string works perfectly from my experience.

5. Save install.cmd

6. Paste the below text into a new file named eula.reg and save it into the same folder
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Mathematics\Standalone\4.0]
"MsltAccepted"=dword:00000001

7. Create your package and program with SCCM (or your favourite deployment technique) and trigger install.cmd.


The most important part of this article is the silent install string for MSMath_x86.msi; the other deployment options are up to you. This took me a while to solve, and a quick Google search suggests many other administrators are in the same boat.

Monday 3 September 2012

Resolving iPhone wireless dropouts with DD-WRT

I use a couple of Linksys DD-WRT based routers in my home network and have done so for a few years. I love the flexibility of advanced options like RADIUS and the reliability I receive from these devices.

I recently began experiencing some problems with iPhone connections being unreliable, especially after the router had an uptime of over 8 hours. A reboot of the router fixed the problem but rebooting the router every day isn't my idea of a resolution.


The settings that worked

These settings are a mix of settings from around the web, since combining these I have had perfect reliability with my iPhones.

1. Login to your DD-WRT device.

2. Click the 'Wireless' tab.

3. Click the 'Advanced Settings' tab.

4. Adjust the following settings.

Beacon Interval 50
Fragmentation Threshold 2304
RTS Threshold 2305

The key setting is the Beacon Interval, adjusted from the default of 100 down to 50. I couldn't see any difference from the other two settings, but they didn't negatively impact performance so I left them in.


5. Click 'Save' and then 'Apply Settings'.

I tested these settings on a Linksys WRT54G running v24 SP1 Micro and a Linksys WRT150N running v24 STD and there was no negative performance impact on my other devices (laptops, iPad, etc).

The only other setting I adjusted was TX Power, to 84, but that was just to cover a larger area of my home and is totally optional.

Ensure you do your own before-and-after performance comparisons, but as far as I can tell these settings bring only stability gains.

Sunday 2 September 2012

Enterprise deployment of the Adobe Photoshop CS6 6.0.1 security update - CVE-2012-4170

There is no denying Adobe probably has the most convoluted enterprise processes in the industry, but luckily the latest CS6 6.0.1 security update deployment isn't too bad at all, if you know where to look that is.

The official Adobe blog post simply mentions going to Help > Update from within Photoshop, which is OK for updating a single client but doesn't suffice for enterprise upgrades.

This update probably isn't blog-worthy, but I had trouble finding both the update itself and the silent install strings for it, so I thought it might save someone else a few minutes.



Obtaining the update

The Adobe blog post doesn't mention that the update can be downloaded in a redistributable format from the Adobe support downloads page, available here: http://www.adobe.com/support/downloads/new.jsp

The above address is the location for all the latest Adobe network deployable updates/security fixes. You can also grab the Photoshop CS6 6.0.1 update for Windows directly from here http://www.adobe.com/support/downloads/detail.jsp?ftpID=5408



Silently installing the update

Again, this is REALLY easy, but I saw a number of syntax errors on other blogs that complicated things.

1. Extract the Update zip file.

2. Create an install.cmd with the following text inside:

start /wait AdobePatchInstaller.exe --mode=silent

Please note the "--mode=silent" has a lowercase s; a capital S will cause this process to fail.


3. This next step is totally up to you, but I also deploy a blank file to the file system. This file allows me to easily evaluate the patch level of Photoshop when deploying future updates.

copy /Y adobecs6_13_0_1.tch c:\windows\deployment\adobecs6_13_0_1.tch

You're done! You can now deploy install.cmd with SCCM or your favourite deployment tools. Hopefully these simple tips/references save you a few minutes.

Friday 17 August 2012

Techniques to avoid Citrix Xendesktop boot storms

In any environment running Citrix XenDesktop with a PVS configuration, sooner or later you are likely to come across a boot storm, or at least a mini one.

A boot storm is essentially a denial of service. It occurs when multiple XenDesktop or XenApp servers reboot simultaneously and consume all available resources (normally CPU), causing extreme slowness across the rest of the environment. In some cases the initial boot storm can flow on for the rest of the day as your virtual infrastructure never recovers from the initial resource demand.

As you can imagine, in an educational environment this is amplified, as X number of users log off at the end of each lesson and then expect to log in 5 minutes later when their next lesson starts.

An easy fix would be to simply disable any "reboot on logoff" functionality, but that has its own implications.

For example, your PVS environment may redirect the write cache to local storage. This gives improved performance, as it is less reliant on network infrastructure, but the cache is usually smaller. If your systems were not rebooting on every logoff there would be increased potential for the write cache to fill, and with XenDesktop 6 the write cache overflow lands on the PVS HDD itself.



Battling the infamous boot storm without changing write cache settings
After analysing our environment we decided we wanted to disable "reboot on logoff" but definitely wanted to keep our "Cache on device hard drive" write cache configuration. These two don't really work together by default, but a few changes allowed us to make them work perfectly together.

Part 1 - Daily reboot

Firstly we configured a daily reboot; we found the easiest way to do this was through a combination of script and Citrix policy.

Through our Desktop Group properties we configured our Power management schedule to slowly start turning machines off from around 12AM, reaching a low of 0 between 3AM and 4AM. Machines then start turning back on, returning us to our peak of 100 machines at 7AM, ready for staff and students to log in at 8AM. By slowly turning these machines on and off we ensure we don't trigger a boot storm.

To complement our Desktop Group Power management configuration, we also set a simple scheduled task on the virtual desktops themselves to trigger a reboot at 3:45AM. At that time there should be no more than around 5 machines still running, ensuring we only reboot a few machines at once. This might not suit all environments, but in ours nobody is ever using Citrix at 3:45AM.

That ensures at least one reboot a day clears the write cache.


Part 2 - Write cache evaluation on logoff

Part 2 is slightly more complex but just as important in ensuring the write cache has space available.

Using the kixtart scripting language and a logoff script, we run an evaluation of available write cache space. Depending on the outcome of that evaluation, we either trigger a reboot or simply allow the system to log off.

This ensures any system below a specified threshold of available write cache will reboot, while the rest are immediately ready to serve the next user. I have attached the evaluation script below, written in the kixtart language.

writecachemonitor.kix
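In outline, the decision the script makes is tiny. Here is the same idea as a Python sketch for readability (the 512MB threshold and the root path are placeholders; the real script is kixtart and checks whatever write cache drive and threshold suit your environment):

```python
import shutil

def needs_reboot(path, min_free_mb=512):
    """Return True when free space on the write cache drive drops below the
    threshold, meaning the system should reboot to reset the PVS write cache."""
    free_mb = shutil.disk_usage(path).free // (1024 * 1024)
    return free_mb < min_free_mb

# At logoff: reboot if the cache is nearly full, otherwise just log off.
if needs_reboot("/", min_free_mb=512):
    print("trigger reboot")
else:
    print("log off normally")
```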

This script can be triggered by a simple Windows domain logoff user script targeted at the virtual desktop OU with loopback processing enabled.


These two simple techniques have worked wonders for us: not a single boot storm, noticeably better performance and much happier end-users.

Thursday 21 June 2012

Configuring Forefront UAG trunks to support Yubico YubiRadius OTP authentication

In the process of preparing some of my external services for Yubikey integration I have been faced with a few problems, and integration with Forefront UAG is no exception.

Adding the YubiRadius RADIUS server to UAG as an authentication server is ridiculously easy. Open the desired trunk's properties, go to the Authentication tab, add a new RADIUS authentication server and enter your server IP and secret key.

After spending all of 2 minutes configuring YubiRadius as an authentication provider for one of my existing trunks I attempted to login and was repeatedly met with a generic UAG "Access Denied" screen.

I jumped onto the YubiRadius box via SSH and restarted FreeRADIUS in foreground verbose debug mode by starting it with freeradius -f -X. FreeRADIUS gave me a vital clue: UAG was only passing the first 20 characters of the OTP to the YubiRadius server, so of course YubiRadius was replying to UAG with access denied.



Fixing the Issue

The problem occurs because by default UAG only allows 20 characters in the password field; anything longer is automatically truncated to 20 characters before being passed to the authentication server. In most instances this would be fine, but for OTPs it simply doesn't work. Luckily for us, the fix is a piece of cake.
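To see why validation fails, consider what a 20-character limit does to a Yubikey OTP, which is 44 modhex characters (a 12-character public ID followed by the 32-character encrypted part). A quick Python illustration (the OTP string below is a stand-in, not a real token):

```python
PASSWORD_LIMIT = 20                 # UAG's default password field limit
otp = "cccccccccccb" + "k" * 32     # 44-char stand-in for a real Yubikey OTP

sent = otp[:PASSWORD_LIMIT]         # what UAG forwards after truncation
print(len(otp), len(sent))          # prints: 44 20
```

The RADIUS server only ever sees the first 20 characters, so the OTP can never validate.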

1. Log into your UAG box and open the following folder "%programfiles%\Microsoft Forefront Unified Access Gateway\von\InternalSite\samples"

2. Copy the customDefault.inc from the samples folder to "%programfiles%\Microsoft Forefront Unified Access Gateway\von\InternalSite\inc\CustomUpdate"

3. Edit customDefault.inc and change the PasswordLimit field to 50 (or more if you are using a custom OTP length), as per below. You may even need to consider a length closer to 70 characters if you are using a shared field for the Active Directory password and OTP.
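The edit itself is a one-liner. Assuming the sample file's ASP syntax carries over (the exact markup around the line may differ between UAG versions, so go by what is actually in the sample customDefault.inc), it looks something like this:

```asp
<%
' Allow longer passwords so the full OTP reaches the RADIUS server
PasswordLimit = 50
%>
```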


4. Open a command prompt and issue an iisreset

Done like a dinner. Your UAG server should now pass the full OTP token to FreeRADIUS, which can then properly validate whether the token is authentic.

Wednesday 20 June 2012

How to run Yubico YubiRadius on Microsoft Hyper-V

Anyone who has read my blog posts in the past will know I am an advocate of Yubico Yubikeys, and in particular their implementation with YubiRadius.

YubiRadius allows the system administrator to host an in-house RADIUS server (I was about to write Yadius) that is the missing link between Yubikeys and anything that can interface with RADIUS.

Unfortunately YubiRadius only comes in OVF and VMware formats, which leaves anyone with Hyper-V infrastructure in a hole. Luckily it's quite easy to get it up and running on Hyper-V.



The Conversion Process

1. Download the YubiRadius VMware edition from here: http://yubico.com/yubiradius-vm


2. Grab the VMDK2VHD converter; it easily converts VMDK files directly to VHD for use in Hyper-V. You can download it from here: http://vmtoolkit.com/files/folders/converters/entry8.aspx


3. Open VMDK2VHD. It will prompt you for a VMDK file; point it at the YubiRadius VMDK file you downloaded in step 1, select an output location for your VHD file and start the process.


4. Once the VHD has been created, jump onto your Hyper-V box and create a new virtual machine with the following attributes:

Memory: 1024MB (or more if you want)
A Legacy Network Adapter
Your newly created VHD file assigned to the IDE controller

The rest of the settings are up to your personal preference.



5. Take a snapshot before you start, just in case you hose something in the setup process. Then boot your new Hyper-V YubiRadius server.


6. Log in with the default credentials.
Username: root
password: yubico

Once logged in, the GUI may not load correctly; it didn't for me. A simple Ctrl+Alt+F2 will redirect you to a working terminal. From here you can use update-rc.d -f <service> remove to stop services you don't want running at boot, such as the X11 GUI.


7. We need to set up the network adapter so we can log in via SSH for future configuration. Enter the following commands at the command prompt.
cd /etc/network
nano interfaces
Below are some example settings you can change and then paste directly into the interfaces file.
# The primary network interface
allow-hotplug eth0
iface eth0 inet static
        address 192.168.1.100
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255
        gateway 192.168.1.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 192.168.1.1
        dns-search domain.internal
Make sure you comment out the auto eth0 line with a #, or you may have problems booting.

Press Ctrl+X to exit and type Y to save the changes.


8. Next we need to add at least one DNS server to /etc/resolv.conf to enable DNS resolution. Change the IP address below to reflect your DNS server.
echo nameserver 192.168.1.1 >> /etc/resolv.conf

9. Finally, issue a reboot with the command below for the settings to take effect.
shutdown -r now

10. After the system has rebooted, you should be able to SSH in and access the Webmin interface via http://IP:10000/

The default username is yubikey and the password is yubico.



It might be a good idea to set up an iptables firewall and disable as many unrequired services as possible; YubiRadius is fairly loose by default.

If you hose the system (it doesn't reboot after you change the network settings) you can go back to the snapshot you took before you started. Ensure your network configuration is correct and that you commented out the auto eth0 line after changing the interfaces file.

Wednesday 16 May 2012

Modifying YubiRadius to authenticate only the OTP for use with Citrix Access Gateway

For those that aren't aware, Yubico Yubikeys are a fabulous product: a one-time password token built with flexibility in mind. Don't want to use Yubico's cloud-based authentication servers? No problem, run your own. Don't trust the Yubico AES keys that come with the Yubikeys? No problem, add your own keys.

One initial drawback of the Yubikey was its limited use in the enterprise space: if you wanted to use them within the enterprise, you needed to write your own authentication mechanisms to tie in with the Yubico API. Enter YubiRadius, a Yubikey RADIUS solution that ties together FreeRADIUS, Apache and some custom PHP scripting to deliver combined LDAP/Yubikey authentication.

Not all enterprise applications are able to use two separate RADIUS servers (an LDAP RADIUS server then YubiRadius). With this in mind, Yubico addressed the problem by having the user enter their password immediately followed by an OTP in the password field. This is a great solution and really opens up a number of new ways Yubikeys can be applied.

One of the biggest limitations of entering the password + OTP in one field is single sign-on. With a product like Citrix Access Gateway (CAG), which takes the initial username and password and passes them through to the XenDesktop/XenApp instance sitting behind the CAG, this will never work. What CAG does support, though, is a dual password field: the first is the normal LDAP password field used for pass-through, and the second can point at a RADIUS server.

As you can imagine, users wouldn't be impressed knowing they have to type their password into the LDAP password field, then again into the YubiRadius password field, and ALSO insert their Yubikey for the OTP. The solution is to use YubiRadius in OTP-only mode, so it purely focuses on validating the OTP against the LDAP username.

Unfortunately, out of the box YubiRadius doesn't support OTP-only authentication, but with a few modifications to the /var/www/wsapi/ropverify.php file we can get the desired result without breaking any password + OTP logins that may occur.

The modifications below change the ropverify.php file to first check the password length. If the password is exactly 44 characters then no LDAP password is present (YubiID + OTP is 44 characters), so LDAP password verification is skipped. If it is not 44 characters in length, the password field is treated as per default: both the LDAP password and the OTP are verified independently.

If you have manually changed your Yubikey keys and your YubiID + OTP results in a string longer than 44 characters, you will need to adjust my code changes to match.
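In outline, the patched logic is just a length check on the submitted password field. Here it is as a Python sketch (the function and variable names are mine, not from ropverify.php):

```python
OTP_LEN = 44  # a Yubikey OTP: 12-char public ID + 32-char encrypted part

def split_password_field(field, otp_len=OTP_LEN):
    """If the field is exactly one OTP long, treat it as OTP-only and skip the
    LDAP password check; otherwise split it into LDAP password + appended OTP."""
    if len(field) == otp_len:
        return None, field                      # OTP only
    return field[:-otp_len], field[-otp_len:]   # password + OTP

otp = "cc" * 22                                 # 44-char stand-in for a real OTP
assert split_password_field(otp) == (None, otp)                   # OTP-only login
assert split_password_field("hunter2" + otp) == ("hunter2", otp)  # password + OTP
```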

While this works great in my environment, I would suggest that if you want the highest level of security you use multiple instances of YubiRadius: one for OTP-only authentication and another for LDAP password + OTP authentication.



Step by Step

NOTE: This was done on version 3.5.1 of YubiRadius, use it at your own risk. 

1. Jump onto your YubiRadius box either via the console or SSH and su to root.

2. Navigate into the /var/www/wsapi folder
cd /var/www/wsapi

This is where the ropverify.php file lives. FreeRADIUS sends a request to ropverify.php, which verifies that both the LDAP password and the OTP line up with the LDAP username.

3. Download my patch file from pastebin; a wget should do the job. Name the downloaded file ropverify.patch.
http://pastebin.com/raw.php?i=K8U95nx5

4. Take a backup of your ropverify.php in case something goes wrong.
cp ropverify.php ropverify.php.bak

5. Issue the following command to patch the existing ropverify.php
patch ropverify.php < ropverify.patch

The file should now be patched and ready to go; you still need to make one change to enable OTP-only logins.

6. Edit the ropverify.php file
nano ropverify.php

7. Find the line that reads:
$otpOnlyAuthAllow = 0; //change to enable OTP only

Change the 0 to a 1; to disable it again, change it back to 0.


It really is as easy as that. Now you can have users login with only the OTP or with both the password and OTP in the password field.

Please keep in mind you should disable auto-provisioning if you are using this script. If auto-provisioning is enabled, anyone with a Yubikey can potentially associate themselves with an existing LDAP account and bypass your two-factor authentication.


Friday 11 May 2012

Exchange 2010 OWA users report a blank screen after first login

I recently provisioned a number of Exchange 2010 e-mail accounts as part of a mail roll-out project I was undertaking. The owners of these accounts predominantly access their accounts via Outlook Web Access (OWA) using forms-based authentication, both via a local instance and another instance sitting behind Forefront UAG.

Provisioning the accounts was the easy part.


The Problem

As soon as I started handing these accounts out, users were reporting the inability to get past the select language screen. Users were able to log in without issue, but after selecting an appropriate language and time zone they simply received a blank screen. Anyone who was persistent enough to try 3 times in a row eventually did get in, and once they were in, they didn't have any more problems.

After replicating the issue and confirming it was in fact happening, I checked Event Viewer, the Exchange logs and the IIS logs, but I was unable to pinpoint what was causing the problem. In fact, the logs had nothing unusual in them at all.

I was able to determine that the blank screen appeared when the web browser attempted to access the following URL: https://OWAURL/owa/lang.owa


The Fix

As the problem seemed to disappear after the user entered and accepted the time zone and language data 3 times, I thought I would try forcing the locale ID onto the OWA instances in question.

Voila: after executing the following command in an Exchange PowerShell console, users no longer reported any issues.

Set-OWAVirtualDirectory "owa (Default Web Site)" -DefaultClientLanguage <Locale ID>

You can use this table provided by Microsoft to search for your Locale ID.

If you are using a custom OWA instance with a different name, you can use the following command to get a list of all the OWA instances on your server.

Get-OwaVirtualDirectory

Hopefully you are smart enough to do this BEFORE you start adding accounts, not after users start having problems.

Thursday 5 April 2012

iPhone native twitter app is not displaying mentions under the @connect tab

Recently my Twitter stopped updating mentions under the @connect tab in the default iPhone Twitter application. Initially I thought it was just me, but after a quick Google search I found a number of users experiencing the same problem.

After first trying a restart and then playing with a number of settings, I was unable to resolve the problem, but the following process worked.


The Fix

1. Open the Settings menu.

2. Select Twitter.

3. Click on the account that is causing the problem; in my case it is "@teamau".

4. Click "Delete Account". This doesn't delete your Twitter account, it just removes the account from your iPhone.


5. Click "Add Account" and re-add the account you just deleted.

Voila, your @mentions should be working again!

Monday 19 March 2012

Sharepoint error "The security token username and password could not be validated" during claims based authentication configuration

There are many great tutorials on configuring SharePoint 2010 (SP) to use claims-based authentication, such as this one by SharepointChick. Claims-based authentication is a handy way to pair SP with a lightweight LDAP directory.

The configuration is reasonably complex, and determining where an error lies is a balancing act between Windows Event Viewer, LDAP logs and ULS logs.

One such error I received during my SP and OpenLDAP configuration was this application log entry in Event Viewer:

Source: SharePoint Foundation
Event ID: 8306
Task: Claims Authentication
Error: An exception occurred when trying to issue security token: The security token username and password could not be validated.

This error is accompanied by the inability to log in to your SP site. A quick Google search turned up a number of users in the same boat and plenty of suggestions, but no definitive answers.




Clumsy Configuration

When you are working from configuration samples and writing large portions of text based configuration there is always the possibility of data entry errors.

Any of the tutorials you follow, such as this one, will ask you to create a new web application, select claims based authentication and then do some manual configuration in some of the web.config files on your SP site.

This is where the doubt creeps in: during the configuration of a membership provider, depending on the tutorial you are reading, you may be asked to configure an ASP.NET membership provider with "useDNAttribute" or "userDNAttribute" or both.

As soon as I removed "userDNAttribute" and set "useDNAttribute" to false, everything worked for me and the security token validation error message disappeared. I am not sure if this is because I am using OpenLDAP or if that makes any difference at all.

The below configuration is an example of my working membership provider syntax.
<membership>
<providers>
      <add name="LdapMember"
         type="Microsoft.Office.Server.Security.LdapMembershipProvider,
 Microsoft.Office.Server, Version=14.0.0.0, Culture=neutral,
 PublicKeyToken=71e9bce111e9429c"
         server="192.168.1.1"
         port="389"
         useSSL="false"
         useDNAttribute="false"
         userNameAttribute="cn"
         userContainer="ou=users,dc=contoso,dc=local"
         userObjectClass="inetOrgPerson"
         userFilter="(ObjectClass=inetOrgPerson)"
         scope="Subtree"
         connectionUsername="Cn=sharepointuser,dc=contoso,dc=local"
         connectionPassword="password"
         otherRequiredUserAttributes="title,sn,cn,mail,description"/>
   </providers>
</membership>

In retrospect it seems like an obvious and easy-to-fix problem, but when there are so many variables at play it isn't always so easy.

Sunday 11 March 2012

Sophos Enterprise Console 5 displays clients as "Awaiting policy transfer" after upgrade

I am yet to have a smooth Sophos Enterprise Console (EC) upgrade; there is always some certificate, configuration or downright weird issue. This time, after upgrading from 4.7 to 5.0, everything seemed perfect. I should have known that was too good to be true.

After a policy change didn't find its way to my endpoints, I did some digging in EC and found nearly all of my endpoints were hanging at "Awaiting policy transfer". The only clients that were the "Same as policy" had been rebuilt since the EC upgrade took place.

Immediately I thought of the dreaded Sophos certificate problem, but further investigation ruled out this theory; fortunately the resolution was much easier.



Please update my policy changes!

1. Fire up EC 5

2. Right click any computer that is turned on but still "awaiting policy transfer", then select "View Computer Details"

3. Here you find the status of all the policies on the selected client, for example "Anti-Virus and HIPS Policy", "Updating Policy" and "Application control policy".

Take note of all the policies that are "awaiting policy transfer"; these are the ones we will need to fix.

4. The fix is ridiculously easy: edit one of the policies that is "awaiting policy transfer" and change one option. After changing the option, change it back to your original setting then press OK. Repeat for all policies hanging at "Awaiting policy transfer".

Huh? Hold on, I didn't change anything, right? All I did was check an option then un-check it. Correct! But what I did do was trigger a policy update of old EC 4.7 policies. I am not sure if this changes some underlying configuration or perhaps updates an out-of-date checksum; regardless of what is happening behind the scenes, it resolved my problem.

In the below image I opened my "Tamper Protection Policy", which was "awaiting policy transfer". I then checked "enable tamper protection", immediately unchecked it and clicked OK. Shortly after, my clients began receiving the updated policy.
 

Savour this fix, it's the easiest Sophos resolution you will ever get.

Thursday 1 March 2012

Querying the Sharepoint User Profile Service with Javascript and a content editor web part

So you've gone to the trouble of setting up the Sharepoint User Profile Service (UPS), mapping the attributes you want to use in Sharepoint and possibly troubleshooting some UPS problems. What's next?

More than likely you want to start using some of these attributes and one of the easiest and most flexible ways to do this is with some Javascript and a content editor web part.



Some things you need to know

The content editor web part is a funny one and acts differently depending on how you use it. You can paste Javascript directly into the content editor, but it is much easier to host your script in a .txt file and use the content link functionality to reference it. This gives you much more flexibility in editing: you can use the editor of your choice instead of the flaky, awkward Sharepoint editor.

There is a slight delay while the UPS XML query completes, so a simple Javascript document.write operation won't work for the response of the query. Instead we need to use Javascript ID tags in conjunction with syndication to insert the data dynamically into predefined areas of your script.

The actual request is a SOAP XML request, to which I need to credit this thread for the base code.



Enough rambling, how do I do it?

I am assuming you already have the User Profile Service configured and any attributes you want to import ready to go, configuration of UPS is outside the scope of this tutorial.

The below code gives an example of how I generated a Sharepoint entry page. My requirements were to pull a Student ID number from Active Directory that I could then use to generate URLs and pull data from external databases.

Click HERE for example code

In this simple example I am querying the UPS for a Student ID (stored in the Active Directory department attribute) and the user's "Title" to create a welcome page for my Parent Portal.

Example dynamically generated content

Example dynamically generated link with "Student ID"


Most of the code is self-explanatory, but there are a few changes you will need to make to get started.

1. Please replace the sharepoint.contoso.local in the below line with the address of your Sharepoint UPS.
xmlHttpReq.open('POST', 'http://sharepoint.contoso.local/_vti_bin/UserProfileService.asmx', true);

2. I am pulling the Student ID, stored in the AD "department" field. If you change department in the below line, you can pull any attribute you want, providing you have configured the UPS to poll that attribute.
GetUserPropertyByAccountName(loginName, "department");
The returned value of this attribute will be set as "responsestuid" to which you can manipulate any way you like.


3. At the top of the file I have multiple Javascript IDs: welcomeheader, linksheader and links. You can create as many IDs as you need and apply HTML to them, such as font size and color. You can also place them in the order in which you want them to appear on the page.

For example because I want my "linksheader" above my "links", I position them as follows.
<font size = "4">
<script type="text/javascript" id="linksheader" src="syndication.js">
</script>
</font>

<font size = "2">
<script type="text/javascript" id="links" src="syndication.js">
</script>
</font>
4. Once you have made all the required changes to the example code, save it as "javascript.txt" and either upload it to a Sharepoint Library or any web server.
a. Create a new content editor web part.
b. Edit the web part.
c. Put the link to your "javascript.txt" in the "Content Link" box and press save.
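The request configured in change 1 is a SOAP call to UserProfileService.asmx. If the linked example code isn't to hand, this minimal Python sketch (mine, not the original Javascript) shows the general shape of the envelope; the namespace and element names are my understanding of the standard UPS web service, so verify them against a working request.

```python
# Illustrative only: the shape of the GetUserPropertyByAccountName SOAP body.
# The XML namespace below is assumed from the standard SharePoint UPS web service.

def build_ups_request(login_name, property_name):
    """Return a SOAP envelope requesting one UPS property for one account."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body>'
        '<GetUserPropertyByAccountName xmlns="http://microsoft.com/webservices/'
        'SharePointPortalServer/UserProfileService">'
        '<accountName>%s</accountName>'
        '<propertyName>%s</propertyName>'
        '</GetUserPropertyByAccountName>'
        '</soap:Body></soap:Envelope>'
    ) % (login_name, property_name)

# e.g. querying the "department" attribute (the Student ID in this tutorial)
print(build_ups_request("CONTOSO\\jsmith", "department"))
```

The Javascript in the example code POSTs an equivalent envelope via xmlHttpReq to the UserProfileService.asmx URL shown in change 1.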

With any luck you will be pulling attributes from UPS and creating dynamic pages based on those attributes!

If you are having trouble getting it working, use an alert('error') and move it further and further down javascript.txt until it no longer appears when you load your page. This will give you a strong indication of where the error (normally a syntax error) might be in the code.

Wednesday 22 February 2012

Interactive multi touch, multi projector flexible learning space

You may have read my article written in November 2011 entitled "Building an interactive multi touch surface on the cheap using a Wiimote" in which I discussed the technical build details on how to design your own multi touch surface.



Build Requirements

With the technical design out of the way, the next step was determining how to successfully integrate a multi touch surface into a school environment. After playing with some ideas, we came up with the following requirements.
  • Create an interactive learning space with 2 projectors, one traditional wall facing projector and one interactive table projector.
  • It has to be easy and work consistently. In my experience, if the system doesn't "just work" then users aren't interested in attempting to use it.
  • The room should remain a flexible space.
With these requirements in mind I decided it would be best to roof mount the projectors and Wiimote, ensuring the tables could be moved if required. This also meant running a power source to the Wiimote with a switch to turn the power on and off.

I am going to utilize the WiiSync autoit script I previously wrote to enable users to easily connect the Wiimote via bluetooth to the computer. If you need the WiiSync script or any other technical details please see this blog post.



The Build

During the 2011 Christmas school holidays I put aside a few days for the installation; the power points were installed by electricians and the remainder of the installation was done in house.

We decided to reuse a decommissioned projector, previously ceiling mounted in another room, for table projection. Additionally we purchased a new projector, roof mount and speakers for the wall projector.

You can click on any of the below images to enlarge them.

 The Wiimote power cable, both projector VGA cables and the audio cable are terminated here.

The wall plates installed, the on/off switch for the Wiimote is visible on top.

The Wiimote being wired with power.

The Wiimote and table projector ceiling termination. 

Ceiling mounted wall projector.

 Final installation of table projector and Wiimote.

Speaker installation.

Other installation information:
  • A 4-way video splitter is used for video output to the monitor, wall projector and table projector.
  • The pens we purchased are the IR Sabre, available for purchase online.
  • An inexpensive bluetooth adapter is connected to the computer.
  • A 5v power adapter and switch are used to power the Wiimote.

Build costs:
  • Wall projector and mount - $1500
  • Table projector - free (reused old decommissioned projector)
  • Table projector mount - $150
  • Wiimote - $50
  • Wiimote power adapter - $30
  • Switches and electronics - $20
  • Wall conduit - $50
  • Infrared pens - $36
  • Bluetooth adapter - $6
  • 4-way VGA splitter - $99 
  • Speakers - $200
  • Misc parts - $20
Total build cost:  $2161

Considering I was going to replace the wall facing projector in this space regardless, the total cost is very reasonable. Similar products available commercially, such as the SMART Table, cost $7000+.



The room in action

Finally, after hours of testing, planning and 2 days of installation, the multi touch, multi projector system is complete. Click the YouTube video below for a walk through.



This system has only just gone live in our environment but I am very happy with the implementation. I will report back in a few months' time with a progress report on how the users are finding the system. After all, if it doesn't get used or doesn't work consistently then it doesn't matter how great I think it is.

Uncertainty when changing to RAID 1 or RAID 10 on IBM DS3512 storage system

Big packages full of new equipment are every IT guy's dream. It just so happened a while back I received boxes and boxes of hard drives to expand some LUNs on my IBM DS3512.

There is a fun side to new equipment, unboxing it, installing it and seeing how much better it is than your previous gear. On the other side of the equation is the outright scary side, when you finally press that "expand" or "change raid level" button that activates an irreversible operation on your production equipment. No matter how much preparation and testing you do, there is always some anxiety associated with making big changes.

During the Christmas holidays I planned to upgrade an existing RAID 5 array to RAID 10, which would send the IOPS through the roof. As always I read the documentation, ran through the scenario on paper and then on a test LUN. What I couldn't work out was what would happen when I finally pressed the confirm button to change the RAID level from 5 to "1 or 10".

The IBM storage manager offers the option as follows:

Change > RAID Level > 1 or 10...


This caused me some concern: while all the other RAID levels were a single option (0, 3, 5, 6), 1 and 10 were bundled together. A logical person would say RAID 1 is a mirror of 2 drives, so if your array has an even number of drives that is 4 or more, then RAID 10 will automatically be selected. Unfortunately I wasn't willing to take that risk; what about obscure proprietary RAID levels such as 1E that might support RAID 1 in a multiple drive configuration?

After searching the IBM Redbook, the user manuals and the DS System Storage manual I was unable to find an answer. IBM also wouldn't provide me one, telling me the issue was a configuration issue and not a hardware issue. I am sorry IBM, but the technical capabilities of a product are not a configuration issue; I wasn't asking you to configure the product for me, just what the RAID levels of your product are capable of.



Don't worry, the DS3512 only supports RAID 1 with 2 drives

Finally I was able to find the answer from the local services provider that I purchased the DS3512 from. The IBM DS3512 only supports RAID 1 with 2 drives, which means:

IF { you select change RAID level to "1 or 10" } AND
   { the number of drives in your array is 4 or more } AND
   { the number of drives in your array is even } THEN
   { it will become a RAID 10 }

So a logical approach would have given me the correct answer, but anyone working in IT knows, relying on logic often results in failure.
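For the record, the rule above can be sketched in a few lines. This is my own illustration of the reseller's answer, not IBM code:

```python
# Sketch of the DS3512's "1 or 10" decision rule as described by the reseller.

def resulting_raid_level(drive_count):
    """Return the level the array becomes after choosing '1 or 10'."""
    if drive_count >= 4 and drive_count % 2 == 0:
        return "RAID 10"
    return "RAID 1"  # only reachable with exactly 2 drives

print(resulting_raid_level(2))  # RAID 1
print(resulting_raid_level(8))  # RAID 10
```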

Tuesday 14 February 2012

Accessing Twain and WIA devices via Citrix Xendesktop

In an environment such as a school there is a need to be much more flexible than a corporate entity. There is a need for more relaxed permissions, multiple users on single computers and a high level of customization.

This flexibility extends itself to the VDI farms, specifically with the ability to plug devices such as scanners into any virtual desktop and have instant access.

I was recently tasked with setting up a legacy Canon "CanoScan" device. When this device was originally released it only had a TWAIN driver but in the past 12 months a WIA driver has also been made available.



TWAIN Pain

TWAIN is now quite a legacy technology; in fact, since Photoshop CS4 Adobe has been trying to remove it from their products, offering compatibility with an optional plug-in. Adobe explains their move to remove TWAIN support from Photoshop: "Because TWAIN is an older technology that is not regularly updated for new operating systems, TWAIN often causes issues in Photoshop."

I didn't have any luck with simply passing the scanner through as a USB device and using the TWAIN driver within the VDI; applications simply didn't detect the scanner.

Fortunately for Xendesktop users Citrix has made available a TWAIN redirection option in XD 5.5. This option can be difficult to get working, depending on your Citrix client receiver version, the driver for your scanner and in which direction the wind is blowing on the day.

My advice to you is avoid TWAIN with Xendesktop if possible; try to find a WIA driver. If TWAIN is unavoidable you can try a few things to get it working smoothly.
  • Ensure XD is at 5.5 or higher
  • Ensure the HDX user policy "Client TWAIN device redirection" is enabled
  • Upgrade client online plugin versions to receiver 3.0+
  • Ensure you are running the latest Twain driver available
Some users have also reported success with disabling "Client USB device redirection" altogether, or alternatively using the HDX user policy "Client USB device redirection rules" to deny the redirection of just the scanner as a USB device.

For example, you could add the following deny rule if your VENDOR id is 1234 and PRODUCT id is 9876.

Deny: VID=1234 PID=9876


This will block your scanner from being passed through as a USB device, allowing the TWAIN device redirection policy to kick in. I don't know on a technical level how the TWAIN device redirection works; the device doesn't appear in the device manager and no local driver is required, but when you launch your scanning application the scanner works perfectly.



WIA Plea-se

Okay, that was a horrible rhyming attempt, but WIA isn't so horrible to configure with Xendesktop. Actually, in my case it was the exact opposite of TWAIN.

Simply grab the latest WIA driver, install it onto your virtual image, wait for your virtual desktops to update to the latest vDisk version and insert your USB scanner. Simple!

The scanner will be passed through as a USB device, provided you have USB redirection enabled or at least an "ALLOW:" rule for the scanner under the "USB redirection rules" user HDX policy. When it is inserted it will appear as a normal USB device, the driver you previously installed will kick in and applications will detect it as a native WIA scanner.

If at all possible, go straight to WIA and save yourself some grey hairs. If WIA isn't a possibility, grab a triple espresso, put aside a few hours and start playing with settings combinations.

Monday 13 February 2012

Spanning tapes with DPM 2010

Regardless of how great disk backup is, most administrators will want to do some regular off-site backups to tape, and often the data can be larger than a single tape. Enter tape spanning: the ability to span a single backup job over multiple tapes. This isn't best practice, but if you have a single 2TB resource and only a maximum capacity of 1.6TB per tape (LTO4) you don't have many options: either purchase an expensive new tape drive with higher capacity or enable spanning.

Initially I was a little taken aback by the absence of any tape spanning options in the DPM 2010 GUI; in fact, even the official manuals have very little reference to how tape spanning can be done.

After some digging I found that spanning will in fact kick in by default if the protection group is larger than a single tape, but DPM will only wait 1 hour for a replacement tape before the job fails. This is more than enough time for a tape loader, but if you're using a manual LTO drive this could be a problem.



The Solution


The solution lies in a simple registry key.

"HKLM\Software\Microsoft\Microsoft Data Protection Manager\1.0\Prompting"

Under this key is a REG_DWORD value named "PromptingTimeOut"

The value is in milliseconds, so for each hour you want to wait, you need to multiply your value by 3600000.

For example, 4 hours * 3600000 = 14400000

Be sure you enter the value as a decimal, or your wait time might be totally different from what you were hoping for.
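The arithmetic above can be sketched in a couple of lines, which also shows why the decimal/hex distinction matters when typing the value into regedit:

```python
# Convert a desired tape-swap wait time in hours to the PromptingTimeOut
# DWORD value (milliseconds).

def prompting_timeout_ms(hours):
    return hours * 3600000  # 3,600,000 ms per hour

value = prompting_timeout_ms(4)
print(value)       # 14400000 (enter this with regedit in decimal mode)
print(hex(value))  # 0xdbba00 (what the same wait looks like in hex mode)
```

Typing 14400000 into the hex field instead of decimal would give you a wildly different wait time, which is the trap the warning above is about.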

Thursday 9 February 2012

Customizing the UAG SP1 logon page

Microsoft Forefront UAG is a great product for adding a bit more security to the publishing of internal websites. The ability to screen the login process, apply some basic IDS and NAC is very handy indeed.

Users wanting to take UAG to the next level might consider customizing their login landing page to give a more corporate feel to their external sites. Olivier Detilleux published a great tutorial explaining how this process works, but unfortunately, judging by a number of Technet posts with users asking questions, it's evident some of the detail in Olivier's article is lost on some users.



How does the customization work? 

Although the process changed with SP1, it is probably easier now than it was before.

1. Navigate to "C:\program files\Microsoft Forefront Unified Access Gateway\von\InternalSite"

2. Create your custom headertopr.gif and place it in "C:\program files\Microsoft Forefront Unified Access Gateway\von\InternalSite\Images\CustomUpdate"

3. Copy  "C:\program files\Microsoft Forefront Unified Access Gateway\von\InternalSite\inc\logo.inc" to "C:\program files\Microsoft Forefront Unified Access Gateway\von\InternalSite\inc\CustomUpdate\logo.inc"

4. Rename "C:\program files\Microsoft Forefront Unified Access Gateway\von\InternalSite\inc\CustomUpdate\logo.inc" to "C:\program files\Microsoft Forefront Unified Access Gateway\von\InternalSite\inc\CustomUpdate\<trunkname><issecure 0 or 1>logo.inc"

For example, if your trunk is called "OWA" and it is an https trunk, then your custom logo.inc would be called "OWA1logo.inc", with the 1 indicating https (0 is used for http).

5. Now you need to remove the "if" scripting at the top of this file. This scripting is used in the original logo.inc to detect your custom "<trunkname><issecure 0 or 1>logo.inc", and if it is left in a custom logo.inc it will cause an error message.

To do this from the top of your custom logo.inc please remove
<%'include file for title
' xxxxxxxxxxxxxxxxxxxxxxx DO NOT EDIT THIS FILE xxxxxxxxxxxxxxxxxxxxxxxx
' A.O.detectionDOSFix - Store include file names in Application and not in Session.
if Application(g_site_name&g_secure&LOGO_INC) <> FILE_NOT_EXIST then
    include Application(g_site_name&g_secure&LOGO_INC)
else%>
 
and from the bottom of the file remove 
<%end if%> 

6. Then you can make the customizations to your custom logo.inc, such as inserting your own header image as per Olivier's tutorial.
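The trunk-based naming convention from step 4 is mechanical enough to express as a tiny helper. The function is purely illustrative, not part of UAG:

```python
# Build the custom logo.inc file name: <trunkname><1 for https, 0 for http>logo.inc

def custom_logo_name(trunk_name, is_https):
    return "%s%dlogo.inc" % (trunk_name, 1 if is_https else 0)

print(custom_logo_name("OWA", True))   # OWA1logo.inc
print(custom_logo_name("OWA", False))  # OWA0logo.inc
```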

Sunday 5 February 2012

Deploying custom Microsoft Word 2010 registry settings at logon

In December I added a blog post entitled "Deploying custom Microsoft Word 2007 registry settings at logon" in which I detailed how to select, export and apply custom advanced settings for Word 2007. Since then I have moved to Office 2010 in my environment and, to my surprise, the process has changed; there are a few more steps involved.



I am applying my exported settings but they aren't working

You're not alone; it took me a while to work out what was going on here.

In Word 2007 the process was to open Word, set the custom advanced settings you wanted (such as picture in front of text), close Word, jump into the registry and export the [HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Word\Data\Settings] value. Then you could subsequently import it during the logon process with a logon script.

The first part of the process remains the same, but I began having problems when I applied the .reg file during logon. When I opened Word the settings were not applied, yet if I re-applied the .reg file afterwards it worked; the settings applied perfectly.

After 15 minutes of chasing my tail, I finally realized that the issue was Word needed to be started before the .reg file could be applied. This is caused by Word writing a number of registry values (and overriding the above value) when it starts for the first time per user account. So if you apply the settings before Word is launched, they are wiped on the first launch.



The Resolution
The fix is quite easy, but it did take some messing around with settings combinations to work out which settings I needed and which I didn't. Obviously I don't want to export the whole Office 2010 HKCU key, as there is some information regarding licensing, user names, etc. that I don't want to apply to every account.

The following base settings need to be applied; they stop Word from overriding your customizations.



[HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common\General]

"ShownOptIn"=dword:00000001
"FirstRunTime"=dword:0151d1bc
[HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common\Migration]
[HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common\Migration\Office]
[HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common\Migration\Word]
[HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Options]
"FirstRun"=dword:00000000
"BkgrndPag"=dword:00000001
"ATUserAdded"=dword:00000001


In addition to the above base settings, you need to apply your customizations. For example, if you wanted to apply Word advanced options such as "insert/paste picture as: in front of text", you would create a .reg file with both the above settings AND the exported [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Data\Settings] key.

For the "insert/paste picture as: in front of text" option you also need the following value.

[HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Options]

"InsertFloating"=dword:00000001


Then you can safely apply that .reg file using a logon script and, regardless of whether a user has started Word 2010 before or not, the settings will be applied.

Certain advanced settings have additional registry values that also need applying, such as the "insert/paste picture as: in front of text" example above. If you have other settings you want to apply that aren't working when you export [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Data\Settings], I would recommend getting Regshot and comparing before and after snapshots of the "HKCU\Software\Microsoft\Office\14.0" key, or using a tool like Procmon for live registry monitoring.
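If you script the assembly of the logon .reg file, it comes down to string concatenation: the base settings above plus your exported customizations under one "Windows Registry Editor Version 5.00" header. A minimal sketch, where the exported fragment is a placeholder for your own regedit export:

```python
# Sketch: merge the base settings with an exported customization fragment
# into one .reg file for the logon script.

BASE_SETTINGS = "\n".join([
    r"[HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Options]",
    '"FirstRun"=dword:00000000',
    '"BkgrndPag"=dword:00000001',
    '"ATUserAdded"=dword:00000001',
])

def build_logon_reg(exported_fragment):
    """Prepend the required .reg header and join both fragments."""
    return ("Windows Registry Editor Version 5.00\n\n"
            + BASE_SETTINGS + "\n\n" + exported_fragment)

print(build_logon_reg("; paste your exported Data\\Settings key here"))
```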

Changing locked update settings on Sophos Endpoint Security and Control 9.7

When debugging Sophos updating configurations you may need to regularly change the primary or secondary update settings. This can be done via the Enterprise Console, but that can be slow and messy.

By default these settings are locked when you apply them via policy, restricting even administrators from manually changing them for testing.

Fortunately Sophos have included a mechanism allowing a local administrator to change these settings on an as-required basis.



Unlocking the update fields

1) Open explorer to C:\ProgramData\Sophos\AutoUpdate\Config\

2) Open iconn.cfg in your favourite text editor

3) If you want to edit the primary update location look for the following heading.
[PPI.WebConfig_Primary]

4) Under the [PPI.WebConfig_Primary] heading there is a field named.
AllowLocalConfig = 0

Allowing local update changes is as simple as changing that field from 0 to 1. Easy as that. Oh, and Sophos, please allow HTTPS-based updating soon, we NEED it!
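If you find yourself unlocking this repeatedly while testing, the edit can be scripted. The sketch below assumes the simple "field = value" layout shown above and is my own illustration, not an official Sophos tool:

```python
# Flip AllowLocalConfig to 1, but only under the [PPI.WebConfig_Primary] section.

def unlock_primary(cfg_text):
    out, in_primary = [], False
    for line in cfg_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("["):
            # Track which section we are in
            in_primary = stripped == "[PPI.WebConfig_Primary]"
        elif in_primary and stripped.replace(" ", "").startswith("AllowLocalConfig="):
            line = "AllowLocalConfig = 1"
        out.append(line)
    return "\n".join(out)

sample = "[PPI.WebConfig_Primary]\nAllowLocalConfig = 0"
print(unlock_primary(sample))
```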

Saturday 4 February 2012

Using Cisco Spectrum Expert and Backtrack to identify wireless anomalies

Cisco wireless controllers are great products; they allow the administrator to manage the entire wireless farm from a single console, making bulk changes and solving problems as required. Cisco recently released the CleanAir access points that take troubleshooting to a new level without the need for expensive spectrum analyser cards.

Recently I began receiving e-mail alerts from the wireless controller complaining of a "WiFi Invalid Channel"; the exact error message was:
WCS has detected one or more alarms of category Security and severity Critical in Virtual Domain root for the following items:

Security-risk Interferer 'WiFi Invalid Channel' is detected. (2 times)

E-mail will be suppressed up to 30 minutes for these alarms.
Every 30 minutes the error would repeat, over and over again. The strange thing about this particular error was it would still alert in the middle of the night, ruling out most business devices and appliances like microwaves. While this wasn't affecting the CleanAir rating of the AP dramatically, it was continually triggering alerts and being flagged as a security issue.

After sending out an e-mail to key staff asking if any new wireless based equipment was installed recently and receiving no response I broke out the Cisco Spectrum Expert.



Detecting the problem

Fortunately the Cisco 3500i series CleanAir access points can be used in conjunction with the Cisco Spectrum Expert software to troubleshoot issues such as this.

To do this you need to head over to Cisco.com and grab a copy of Cisco Spectrum Expert, it wasn't available in my download portal, but a quick email to Cisco resolved that.

Your AP can't service clients for the duration of the Spectrum Expert usage, so plan to do this after hours when your users won't be impacted.

After setting your AP to SE-Connect mode, either from the wireless controller or by connecting directly to the AP console, you can point Spectrum Expert at the AP and start analysing the results.

As soon as I fired up Spectrum Expert I was presented with the "WiFi Invalid Channel". While there was a great deal of detail, nothing definitively helped me identify what the problem device was. I tried searching Google for the exact frequency of the device but wasn't able to dig up any results.

One useful piece of information was the dBm (signal strength), which at -90.7 suggested the problem device was some distance from the AP performing the analysis.


Where is it?

One question leads to another: I don't know what this device is, but can I find it? For this I fell back to a trusty laptop, my Alfa 500mW USB wireless adapter (RTL8187 chipset) and of course Backtrack 5 R1.

I decided to use a tool I have rarely used in the past, ssidsniff, which as its name suggests is normally used for uncovering hidden SSIDs. Ssidsniff was chosen purely because I found it easier to view the BSSID and signal strength than in airodump (where BSSIDs were jumping all over the screen based on the AP's current signal strength).

I quickly identified 00:00:00:00:00:00 as the problem BSSID; ssidsniff flagged it as "no identifiable channel" and "network only contains hosts" (indicated by the H flag), while the valid APs in my environment displayed as "WPA/WPA2 capable". It may be totally different based on the device causing the problem, but it was extremely easy to identify this device as an anomaly compared to the rest of the devices.


My tracking process went as follows:

1) Starting right below the AP that originally detected the WiFi invalid channel, I started ssidsniff and measured the dBm of 00:00:00:00:00:00.

2) I moved 5-10 metres in any direction then remeasured, using Ctrl+C to kill ssidsniff and re-launching it after every move to get the latest dBm. If the signal was getting stronger (indicated by the dBm getting closer to 0; for example -25 is a stronger signal than -70) I kept moving in that direction, otherwise I changed direction.

3) Repeat the above process until you find the highest signal strength you can, then look around.
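The "warmer/colder" test in step 2 comes down to remembering that dBm values closer to 0 are stronger; as a sketch:

```python
# Step 2's comparison: a dBm reading closer to 0 means a stronger signal,
# so -25 dBm beats -70 dBm.

def getting_closer(previous_dbm, current_dbm):
    """True if the latest reading indicates we moved toward the source."""
    return current_dbm > previous_dbm  # less negative = stronger

print(getting_closer(-70, -25))  # True: keep moving this way
print(getting_closer(-25, -70))  # False: change direction
```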

Within about 5 minutes I had a dBm of -20 and found myself right next to a wireless microphone receiver, which funnily enough was turned on. After switching the receiver off and checking Spectrum Expert, the invalid WiFi channel was gone; problem solved! You can then either suppress the error or replace the at-fault equipment.

I am sure there are more technically amazing ways to accomplish this task, but an inexpensive WiFi adapter and Backtrack were able to solve this problem perfectly.