<h2><span style="font-size: large;"><b>Jumping out of every window of opportunity given.</b></span></h2><br />
<h2><span style="font-size: large;"><b>A bit of Cardiff business advice - Data security and going mobile</b></span></h2>2013-03-07<br />
I've just come back from two really useful talks, the last of a series of '<a href="http://www.capitallaw.co.uk/site/about/googlewednesdays.html#">Google Wednesdays</a>' workshops aimed at helping businesses cope with the changing demands of the digital world, the opportunity for which was kindly provided by <a href="http://cardiffstart.com/">Cardiff Start</a>.<br />
<br />
The first talk, 'Lock up your valuables' by Wayne Beynon of <a href="http://www.capitallaw.co.uk/">Capital Law</a>, covered data protection and IP issues. Wayne started by pointing out that virtually every business stores personal information in some way - even if it's just the CVs of potential employees - the handling of which falls under the auspices of the Data Protection Act, and he went on to explain a business's obligations when handling that data. Most of the points were reasonable and fairly obvious, but one that I found had surprising implications concerned storing data outside the '<a href="http://www.wellcome.ac.uk/funding/biomedical-science/application-information/wtd004113.htm">EEA territories</a>' - it's not allowed, which means that if you're using cloud hosting of some kind (e.g. Google Docs) for these data, you could technically be in breach of the Act.<br />
<br />
The next part of Wayne's talk covered handling commercially sensitive information and the ways it can be stolen - there's the more obvious external cyber-attack route and also (probably more pertinent to Wayne's profession!) theft by employees and business partners. Here he recommended various security measures, from restricting access to physical hard copies and information systems, and logging accesses, to the more draconian use of computer forensics experts when things really go awry. He also pressed home the benefits of keeping your employees trained on your data security policies, and finished off with a quick tour of trademarks and copyrights.<br />
<br />
Up next was Liam Giles of <a href="http://www.spindogs.co.uk/">Spindogs</a>, with a great introduction to the considerations in choosing a mobile app over a mobile-friendly website. The gist was that to justify an app, you need a good reason to take on the extra effort of supporting several different platforms (iOS/Android/etc.), app-store approvals and less responsive updates. Such reasons could be that you need some functionality of the phone, such as the GPS, accelerometer or local storage, or that the app paradigm fits better with your chosen marketing path. He then went on to show how the <a href="http://www.spindogs.co.uk/">Spindogs</a> webpage responds to different form factors, gracefully handling the transformation from big to small as the browser window displaying it is resized (go on, have a go yourself!).<br />
<br />
To conclude, Liam presented a plethora of slides provided by Google showing how mobile use is growing for many reasons, and that everyone should pay more attention to their mobile users - 20-25% of users accessing sites via mobile devices should be quite a convincing argument.<br />
<br />
And, to show that I was paying attention:<br />
© Mark Einon 2013.<br />
<br />
<h2><span style="font-size: large;"><b>Set up your Raspberry Pi like a pro</b></span></h2>2012-05-30<br />
<div>In my day job, I've developed on several Linux-based embedded platforms and, without exception, they have all been set up for general development along the same lines - a physical serial connection / terminal and a root filesystem accessed via an exported NFS directory. The Raspberry Pi, unlike most embedded systems I have used, has the added bonus of an X server - so you can also export the desktop via VNC. Following these instructions, I've managed to get the physical connections on the Pi board down to just a USB power lead (connected to a PC laptop) and a wifi dongle or ethernet cable, making it far more portable - play with a RasPi on the train!<br />
<br />
I'm using Fedora Linux (16/17) as a host, and I think you should too :) However, the instructions are almost identical for other flavours of Linux (Ubuntu, Debian, Mint etc.), and should be nearly the same for Mac. If you're using a Microsoft-based OS, you'll have to dig a bit deeper elsewhere - or preferably install Linux (hey, it's free!), maybe even using a 'live' Linux distro.<br />
<br />
Here's how to get all this running on your Raspberry Pi. Note that all commands are run on the host PC, not the RasPi, unless stated otherwise - and you can do each section individually; there are no dependencies between them if, for example, you only want an NFS root filesystem.<br />
<br />
<h2><span style="font-size: large;"><b>Debian Squeeze</b></span></h2>I've chosen to get this set up initially with the reference Debian Squeeze distro (currently debian6-19-04-2012.zip), available direct from the Raspberry Pi website - <a href="http://bit.ly/LG7DBB">RaspberryPi.org/downloads</a>. That way there should be plenty of help available in case trouble arises.<br />
Follow the notes given by <a href="http://elinux.org/RPi_Easy_SD_Card_Setup">eLinux.org</a> to get the Debian image onto an SD memory card, connect up the power/ethernet or wifi dongle/HDMI display and we'll go from there...<br />
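The eLinux guide's core step is a raw image write with dd - here's a sketch of that step, rehearsed on plain files so nothing real gets overwritten. On a real card the output would be your SD device (a stand-in like /dev/sdX, which you must double-check, as dd will happily overwrite anything):

```shell
# Rehearsal of the dd image-write on throwaway files, not a real device.
# For the real card it would be e.g.: sudo dd if=debian6-19-04-2012.img of=/dev/sdX bs=4M
dd if=/dev/zero of=/tmp/demo.img bs=1M count=4   # stand-in for the Debian image
dd if=/tmp/demo.img of=/tmp/card.img bs=4M       # stand-in for the SD card
cmp /tmp/demo.img /tmp/card.img && echo "images identical"
```

The bs=4M block size just speeds the copy up; dd writes the image byte-for-byte either way.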
<br />
<h2><span style="font-size: large;">Serial Output & Terminal</span></h2>A serial (aka 'UART') connection isn't strictly necessary to get the board running or to play with a running system, but it's essential if you're going to attempt home-built kernels or anything low-level that could break the boot, or stop output reaching more complicated display devices such as an HDMI TV. For <a href="http://www.ebay.co.uk/itm/USB-2-0-to-TTL-UART-6PIN-CP2102-Module-Serial-Converter-k-/120783217156?pt=UK_Computing_Parallel_Serial_PS_2&hash=item1c1f3da204">just a few £'s from ebay</a> (search for "USB TTL UART" and make sure you get one with a lead that has individual pin connectors, as shown in the pic below), you don't need to think about getting a soldering iron warm. The Debian distro we're using supports a serial terminal without changing any code, so all you need to do is:<br />
<br />
* <b>Connect the serial dongle between your host and RasPi:</b> Three pins are needed to be connected, GND (Ground), TXD (transmit) and RXD (receive). They should be marked on the USB dongle, and <a href="http://bit.ly/NeUFxr">looking at the GPIO pinout diagram here</a>, the GND is connected to P1-06, TXD to P1-08 and RXD to P1-10 - as in the picture below. It might be worth trying to swap TXD and RXD if you initially get no output. The small red USB dongle pictured obviously slots into a USB port on your host PC.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLK8ctodOvkP-r9jRp_sSuhot6cufNOYtRjXqFgmX0_qEGTk-sfOTCrO5BvQOCsn8LTfZPRgpnOBl3N9vbeONzWgxWzaaln7WESvFI9QsvPciEWQi_MXTj5gBMWYhLkl5slLmWBSUokdY/s1600/IMG_20120530_174549.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLK8ctodOvkP-r9jRp_sSuhot6cufNOYtRjXqFgmX0_qEGTk-sfOTCrO5BvQOCsn8LTfZPRgpnOBl3N9vbeONzWgxWzaaln7WESvFI9QsvPciEWQi_MXTj5gBMWYhLkl5slLmWBSUokdY/s400/IMG_20120530_174549.jpg" title="" width="400" /></a></div><br />
* <b>Start a terminal program on your host:</b> On Fedora, I use minicom (install from the command line (CLI) using '<span style="font-family: 'Courier New', Courier, monospace;"><b>sudo yum install minicom</b></span>'). If running for the first time, run with '<span style="font-family: 'Courier New', Courier, monospace;"><b>sudo minicom -s</b></span>', select 'Serial port setup' on the menu, then press &lt;a&gt; to change the device to /dev/ttyUSB0 followed by &lt;Enter&gt;, then press &lt;f&gt; to change 'Hardware flow control' to 'No' and &lt;Enter&gt; again to exit (don't change anything else). Back at the menu, select 'Save setup as dfl' and then 'Exit'. From now on you can start minicom from the CLI with just '<b><span style="font-family: 'Courier New', Courier, monospace;">sudo minicom</span></b>'. Make sure you have the USB dongle plugged in whenever you run minicom, otherwise it will complain about not finding ttyUSB0.<br />
<br />
* <b>Power on the Pi as normal</b>: and you should see almost the same text scrolling down the terminal screen as appears on the TV connected via HDMI. Result! - Now give the TV back to the family...<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAnarU4ob5ixeyPWSehXnzxB-y_pEJqdNrAZGhNUgCclu6Yb0VqBtfqOxXWAJyoxOOCzG45jqHmH6VVdJtu9I0jgKpYWRNalXdtZPJwROtdrn4_Y7PVaugwYb8lbgwPGZICnnlaKsDzKE/s1600/Screenshot+at+2012-05-30+18:22:34.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="356" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAnarU4ob5ixeyPWSehXnzxB-y_pEJqdNrAZGhNUgCclu6Yb0VqBtfqOxXWAJyoxOOCzG45jqHmH6VVdJtu9I0jgKpYWRNalXdtZPJwROtdrn4_Y7PVaugwYb8lbgwPGZICnnlaKsDzKE/s640/Screenshot+at+2012-05-30+18:22:34.png" width="640" /></a></div><br />
N.B. TODO - Currently on Debian Squeeze, the syslog trace from the init.d scripts isn't shown over UART. This is something that would be good to fix, as we ideally want as much verbose information on UART as possible if something goes wrong.<br />
<br />
<h2><span style="font-size: large;">NFS root filesystem</span></h2><div>Any Linux distribution needs a root filesystem to work, and Debian is no exception. At present, this filesystem is sitting on a partition of the SD card inserted into the Raspberry Pi, and we want to copy it to the host PC. Why? It's much faster and easier to transfer, edit and view files on the PC than on the Pi (also saving precious Pi processing power), you can then back them up (perhaps in a versioning system such as git), and you get access to a larger filesystem than can fit onto a meagre SD card, amongst other reasons. Note that an SD card still needs to be inserted into the Pi, containing the bootloader and Linux kernel code (getting the kernel itself to boot over the network requires a more advanced bootloader such as u-boot; a <a href="http://kernelnomicon.org/?p=103">FreeBSD developer has some notes here</a>). To get this running, I had to do the following:</div><div><br />
</div><div>* <b>Copy the Debian Pi root filesystem to the host PC:</b> Make a new directory on the host PC, at the root of the filesystem, e.g. '<span style="font-family: 'Courier New', Courier, monospace;"><b>sudo mkdir /nfsexports</b></span>'. Take your SD card containing the RasPi Debian image and insert it into the host PC. Two new media devices should be mounted automagically; make a note of where by checking the tail end of the output of the CLI command '<b style="font-family: 'Courier New', Courier, monospace;">dmesg</b><span style="font-family: inherit;">'</span> - mine reports <span style="font-family: 'Courier New', Courier, monospace;">/dev/sdb1</span> -> <span style="font-family: 'Courier New', Courier, monospace;">/dev/sdb3</span> as recently found - then run '<span style="font-family: 'Courier New', Courier, monospace;"><b>df -h</b></span>':</div><div><br />
</div><div><div><span style="font-family: 'Courier New', Courier, monospace;">mark@marke ~]$ df -h</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">Filesystem Size Used Avail Use% Mounted on</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">rootfs 50G 16G 35G 31% /</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">devtmpfs 3.9G 0 3.9G 0% /dev</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">tmpfs 3.9G 860K 3.9G 1% /dev/shm</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">/dev/mapper/vg_marke-lv_root 50G 16G 35G 31% /</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">tmpfs 3.9G 49M 3.9G 2% /run</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">tmpfs 3.9G 0 3.9G 0% /media</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">/dev/sda1 497M 184M 289M 39% /boot</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">/dev/mapper/vg_marke-lv_home 406G 141G 244G 37% /home</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">/dev/sdb1 75M 28M 47M 37% /media/95F5-0D7A</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">/dev/sdb2 1.6G 1.2G 298M 81% /media/18c27e44-ad29-4264-9506-c93bb7083f47</span></div></div><div><br />
</div><div>Mine shows the second of the three (<span style="font-family: 'Courier New', Courier, monospace;">/dev/sdb2</span>) as mounted on <span style="font-family: 'Courier New', Courier, monospace;">/media/18c27e44-ad29-4264-9506-c93bb7083f47/</span>. This long directory is the one whose contents you'll need to copy into the <span style="font-family: 'Courier New', Courier, monospace;">nfsexports</span><span style="font-family: inherit;"> directory created earlier, with the CLI command '</span><span style="font-family: 'Courier New', Courier, monospace;"><b>sudo cp -ap /media/18c27e44-ad29-4264-9506-c93bb7083f47/* /nfsexports/</b></span><span style="font-family: inherit;">' - obviously replace the directory names with your own. Make sure you use this CLI version, and not the GUI - the 'sudo' and the 'p' option passed to the cp command are important.</span></div><div><br />
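The 'p' (preserve) behaviour is the crucial part - a quick self-contained sketch (using throwaway /tmp paths, not the real SD card) shows that cp -ap keeps file permissions intact, which a GUI copy may not:

```shell
# Demonstrate that 'cp -ap' preserves file modes (throwaway demo paths)
mkdir -p /tmp/src-demo /tmp/dst-demo
echo hello > /tmp/src-demo/file
chmod 640 /tmp/src-demo/file          # a deliberately non-default mode
cp -ap /tmp/src-demo/* /tmp/dst-demo/
stat -c %a /tmp/dst-demo/file         # → 640, the mode survived the copy
```

A rootfs full of wrongly-owned or wrongly-moded files (especially setuid binaries and device nodes) simply won't boot properly, which is why the sudo + preserve combination matters.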
</div><div>* <b>Install an NFS server on the host PC:</b> There are lots of guides on the web for installing and setting up an nfs server, so if you get into trouble, have a google. A quick guide is; On Fedora, run the CLI command '<b style="font-family: 'Courier New', Courier, monospace;">sudo yum install nfs-utils libnfsidmap</b><span style="font-family: inherit;">'. Once installed, edit the file</span><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"> </span><span style="font-family: 'Courier New', Courier, monospace;">/etc/exports</span><span style="font-family: inherit;"> to export the <span style="font-family: inherit;">Pi </span>root directory, for me this is what</span><span style="font-family: 'Courier New', Courier, monospace;"> /etc/exports</span><span style="font-family: inherit;"> looks like:</span></div><div><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"><br />
</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">/nfsexports 192.168.2.223(rw,sync,no_root_squash)</span></div><div><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"><br />
</span></div><div><span style="font-family: inherit;">Here the IP address 192.168.2.223 is the IP address of my RasPi. There can be wildcards in this address, so you could export it as 192.168.2.* for everything on your local subnet, or even just * for access to everyone (although this is not </span>recommended). To finally export the directory, start the NFS server with '<span style="font-family: 'Courier New', Courier, monospace;"><b>sudo systemctl start nfs-server.service</b></span>'. To test, I'd make sure that your host PC's firewall and SELinux are disabled, and run '<span style="font-family: 'Courier New', Courier, monospace;"><b>sudo mount -t nfs localhost:/nfsexports &lt;localdir&gt;</b></span>', where &lt;localdir&gt; is a local host PC directory. If successful, you should see the entire root filesystem present in &lt;localdir&gt;.</div><div><br />
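The export setup above can be sketched as a couple of commands. A temp file stands in for /etc/exports here so the rehearsal needs no root; the IP is the post's example Pi address, and the commented commands at the end are what you'd run against the real file:

```shell
# Rehearsal of the /etc/exports line (temp file standing in for the real one)
EXPORTS=/tmp/exports.demo
echo '/nfsexports 192.168.2.223(rw,sync,no_root_squash)' > "$EXPORTS"
grep -c no_root_squash "$EXPORTS"   # → 1, the export line is in place
# On the real host, after editing /etc/exports itself:
#   sudo exportfs -ra          # re-read /etc/exports without restarting
#   showmount -e localhost     # list what is now exported
```

no_root_squash matters here because the Pi's kernel mounts the root filesystem as root, and init needs full access to it.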
</div><div>* <b>Point the RasPi towards the NFS server:</b> Now we need to tell the RasPi to use our exported rootFS, not the one on the SD card. We can do this by modifying a file located on the SD card. If you have the SD card still inserted into your host PC, run '<span style="font-family: 'Courier New', Courier, monospace;"><b>df -h</b></span>' again and take a look at the other mounted partition, <span style="font-family: 'Courier New', Courier, monospace;">/dev/sda1</span> in my case, which is mounted on <span style="font-family: 'Courier New', Courier, monospace;">/media/95F5-0D7A</span>. So I'm going to edit this file, <span style="font-family: 'Courier New', Courier, monospace;">/media/95F5-0D7A/cmdline.txt</span>, which initially looks like:</div><div><br />
</div><div><span style="font-family: 'Courier New', Courier, monospace;">dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext3 rootwait</span></div><div><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"><br />
</span></div><div><span style="font-family: inherit;">And change it to your equivalent of (with no return characters, all one line):</span></div><div><span style="font-family: inherit;"><br />
</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 ip=192.168.2.223:192.168.2.225:192.168.2.1:255.255.255.0:rpi:eth0:off root=/dev/nfs nfsroot=192.168.2.225:/nfs4exports,vers=3 rw rootwait</span></div><div><br />
</div><div>Here, you'll need to change the two instances of '192.168.2.225' to the IP address of your host PC, and '192.168.2.223' to the IP address you want to give your RasPi - make sure that the first three octets (192.168.2) are the same for both, and that '192.168.2.1' matches your router's IP.<br />
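To keep the addresses straight, the cmdline can be assembled from named parts - a sketch using the post's example addresses, writing to a temp file rather than the real SD card. The kernel's ip= fields are, in order, client:server:gateway:netmask:hostname:device:autoconf:

```shell
# Assemble cmdline.txt from its parts (addresses are the post's examples)
PI_IP=192.168.2.223    # address the RasPi will use
HOST_IP=192.168.2.225  # host PC exporting /nfsexports
GW_IP=192.168.2.1      # your router
cat > /tmp/cmdline.txt <<EOF
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 ip=${PI_IP}:${HOST_IP}:${GW_IP}:255.255.255.0:rpi:eth0:off root=/dev/nfs nfsroot=${HOST_IP}:/nfsexports,vers=3 rw rootwait
EOF
wc -l < /tmp/cmdline.txt   # → 1 - cmdline.txt must stay a single line
```

Copy the result over the cmdline.txt on the SD card's boot partition once the addresses check out.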
<br />
* <b>Fix some annoying issues with Debian:</b> (TODO - fix this!) Further to the above, I've had to work around some issues with the Debian init.d scripts that froze startup from NFS (before reaching the '<span style="font-family: 'Courier New', Courier, monospace;">login:</span>' prompt). I had to rename the following files in the exported root filesystem, in the directory <span style="font-family: 'Courier New', Courier, monospace;">/nfsexports/etc/rc2.d/</span> - <span style="font-family: 'Courier New', Courier, monospace;">S14portmap, S15nfs-common, S17ifplugd, S17rsyslog, S17sudo, S18cron</span> and <span style="font-family: 'Courier New', Courier, monospace;">S18ntp</span>. If you find this is an issue for you too, change the 'S' character at the front of these files to 'K', e.g. <span style="font-family: 'Courier New', Courier, monospace;">S14portmap</span> to <span style="font-family: 'Courier New', Courier, monospace;">K14portmap</span>. I'd be interested in any feedback anyone has on this issue...<br />
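The renames above are easy to script with a loop. Here's a rehearsal in a throwaway directory with a subset of the files - point it at /nfsexports/etc/rc2.d and the full list for the real thing:

```shell
# Rehearse the S -> K renames (throwaway dir stands in for /nfsexports/etc/rc2.d)
RC2=/tmp/rc2-demo
mkdir -p "$RC2"
touch "$RC2/S14portmap" "$RC2/S15nfs-common" "$RC2/S17rsyslog"
for f in S14portmap S15nfs-common S17rsyslog; do
    mv "$RC2/$f" "$RC2/K${f#S}"   # e.g. S14portmap -> K14portmap
done
ls "$RC2"
```

In sysvinit terms the leading 'S' means start-at-entry-to-runlevel and 'K' means kill, so the rename simply stops those scripts being started at boot.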
<br />
* <b>Boot the RasPi with an NFS root:</b> Now just unmount the SD card and reinsert it into the RasPi before powering on again. Watch as the RasPi boots to a prompt from NFS (possibly...!).<br />
<br />
</div><h2><span style="font-size: large;"><b>Added bonus - export a desktop via VNC</b></span></h2>The final step to totally ditching the TV in favour of your PC/laptop screen is to export the RasPi desktop via VNC, to view and control it on the host PC:<br />
<br />
* <b>Install a vncserver on the RasPi: </b>Now on the fresh RasPi prompt before you, (<u>not the host</u>!), login (username pi, password raspberry) and install tightvncserver with '<span style="font-family: 'Courier New', Courier, monospace;"><b>sudo apt-get install tightvncserver</b></span>'. Enter a VNC password when prompted, but don't bother with a view-only password. To run the server so that you can log into it remotely, type '<span style="font-family: 'Courier New', Courier, monospace;"><b>vncserver :1 -name RasPi -depth 16 -geometry 1680x1050</b></span>'. Feel free to play with the geometry or pixeldepth, but bear in mind bigger numbers may mean slower performance.<br />
<br />
* <b>Look at the RasPi desktop on your host PC: </b>Fedora should have a ready-installed remote desktop viewer, imaginatively named 'Remote Desktop Viewer'. Find it in your applications list and start it. Select the menu item Remote->Connect, change the protocol to VNC, and add your RasPi IP address and <span style="font-family: inherit;">X server ID</span> in the Host: box (for me, this was 192.168.2.223:1):<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGYx_9W3c2fv_jF49YpgSx9nP9sZdHJ4qjiKqBX-LNWhJSblbLx50xuS_KSsARIlT8sBRvcRdVLBswmXHcRqOBQoeB_HSYAzhvnWdotLqpMouEc0VXVwUrxb8TFu0SjgWNRvKx6XtaKDc/s1600/Screenshot+at+2012-05-30+20:19:38.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="345" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGYx_9W3c2fv_jF49YpgSx9nP9sZdHJ4qjiKqBX-LNWhJSblbLx50xuS_KSsARIlT8sBRvcRdVLBswmXHcRqOBQoeB_HSYAzhvnWdotLqpMouEc0VXVwUrxb8TFu0SjgWNRvKx6XtaKDc/s640/Screenshot+at+2012-05-30+20:19:38.png" width="640" /></a></div><br />
...then click 'Connect' at the bottom of the dialog box. You should be then prompted for the VNC password you chose for the RasPi earlier. Enter it, and bask in the glory of the Raspberry Pi desktop:<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuiCgeM9lTGQKIPwqUgZBBn3k8t-Qakju44PyAt2msG_5QTlL77xiG_s73-1Qzfn_URbLRdKo12hh0doSjU9wzvFEZJ6dHcX3gduPpNxRzKX7WjSablseZhQAUHfzxqvj2OsPZdD4b8Jk/s1600/Screenshot+at+2012-05-30+20:32:44.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="346" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuiCgeM9lTGQKIPwqUgZBBn3k8t-Qakju44PyAt2msG_5QTlL77xiG_s73-1Qzfn_URbLRdKo12hh0doSjU9wzvFEZJ6dHcX3gduPpNxRzKX7WjSablseZhQAUHfzxqvj2OsPZdD4b8Jk/s640/Screenshot+at+2012-05-30+20:32:44.png" width="640" /></a></div><br />
* <b>Happy hacking!</b><br />
<b><br />
</b><br />
<b><br />
</b></div><h2><span style="font-size: large;"><b>Calm that clicking laptop hard drive in Fedora</b></span></h2>2012-04-30<br />
Every time I install a Fedora distro on a laptop, I have to stop the hard drive clicking every few seconds by fixing the over-aggressive Advanced Power Management (APM) setting.<br />
<br />
To provide a permanent fix, the setting needs to be applied every time it boots. Here's how to fix it once and forget about it:<br />
<ul>
<li><span style="font-family: inherit;">Create a new file /etc/rc.d/init.d/hdparm with the following contents (where /dev/sda is the noisy HDD of your choice):</span></li>
</ul>
<span style="font-family: 'Courier New', Courier, monospace;">#!/bin/bash</span><br />
<span style="font-family: 'Courier New', Courier, monospace;">/sbin/hdparm -B 254 /dev/sda</span><br />
<ul>
<li><span style="font-family: inherit;">Make sure this file has the correct permissions to be run:</span></li>
</ul>
<span style="font-family: 'Courier New', Courier, monospace;">$ sudo chmod 755 /etc/rc.d/init.d/hdparm</span><br />
<ul>
<li><span style="font-family: inherit;">Now create a link for the rc level for the hdparm script to run at startup:</span></li>
</ul>
<div>
<span style="font-family: 'Courier New', Courier, monospace;">$ cd /etc/rc.d/rc3.d/</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;">$ sudo ln -s ../init.d/hdparm S80hdparm</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: inherit;">Now on reboot, the hard drive should be much quieter, won't wear out as quickly and will allow you to get on with your work...but it will eat a little more power than before. You can change the</span><span style="font-family: 'Courier New', Courier, monospace;"> hdparm -B xxx </span><span style="font-family: inherit;">number to something a little lower if power use is an issue. To check that the changes are successful, run:</span></div>
<br />
<span style="font-family: 'Courier New', Courier, monospace;">$ sudo hdparm -B /dev/sda</span><br />
<span style="font-family: 'Courier New', Courier, monospace;">/dev/sda:</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> APM_level<span class="Apple-tab-span" style="white-space: pre;"> </span>= 254</span><br />
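The three steps above can be rehearsed first in a throwaway tree - the demo paths below stand in for the real /etc/rc.d locations (which need sudo):

```shell
# Rehearse the script + runlevel-link setup (stand-in for /etc/rc.d)
RC=/tmp/rc-demo
mkdir -p "$RC/init.d" "$RC/rc3.d"
printf '#!/bin/bash\n/sbin/hdparm -B 254 /dev/sda\n' > "$RC/init.d/hdparm"
chmod 755 "$RC/init.d/hdparm"                       # must be executable
( cd "$RC/rc3.d" && ln -s ../init.d/hdparm S80hdparm )  # S80 = start late in runlevel 3
ls -l "$RC/rc3.d/S80hdparm"
```

The relative ../init.d target in the symlink matches the convention the existing links in rc3.d use, so the script is found whichever way the rc tree is mounted.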
<div>
<br /></div><h2><span style="font-size: large;"><b>Blanket bomb a C source file with function entry/exit trace</b></span></h2>2011-07-05<br />
'Blanket bomb' a file with trace for function entry / exit, from vim:<br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">:%s/^{/{\r\ \ \ \ printf(">>>%s:%d\\n", __FUNCTION__,__LINE__);\r/</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">:%s/\(\ *\)\(return.*\n\)/\1printf("&lt;&lt;&lt;%s:%d\\n", __FUNCTION__,__LINE__);\r\1\2/</span><br />
<br />
or just using sed from the command line:<br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">sed -i 's/^{/{\n\ \ \ \ printf(">>>%s:%d\\n", __FUNCTION__,__LINE__);/' Filename.c</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">sed -i 's/^\(\ *\)\(return.*\)/\1printf("&lt;&lt;&lt;%s:%d\\n", __FUNCTION__,__LINE__);\n\1\2/' Filename.c</span><br />
(Note GNU sed's -i is used to edit in place - redirecting the output back to the same file would truncate it before it is read - and, unlike vim, sed works a line at a time so the return pattern shouldn't try to match '\n'.)<br />
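As a quick rehearsal of the substitutions on a throwaway file (using GNU sed's -i to edit in place, since redirecting a file onto itself truncates it before it is read):

```shell
# Blanket-bomb a tiny demo C file with entry/exit trace
cat > /tmp/demo.c <<'EOF'
void foo(void)
{
    return;
}
EOF
# Entry trace after each column-0 opening brace:
sed -i 's/^{/{\n\ \ \ \ printf(">>>%s:%d\\n", __FUNCTION__,__LINE__);/' /tmp/demo.c
# Exit trace before each return, preserving its indentation:
sed -i 's/^\(\ *\)\(return.*\)/\1printf("<<<%s:%d\\n", __FUNCTION__,__LINE__);\n\1\2/' /tmp/demo.c
grep -c printf /tmp/demo.c   # → 2, one entry and one exit trace added
```

The patterns rely on the common C style of an opening brace alone at column 0, so functions written K&R-style ('{' at the end of the signature line) won't be caught.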
<div>
<br /></div><h2><span style="font-size: large;"><b>How to use a 64bit machine to build & install a 32bit Linux kernel for another PC</b></span></h2>2011-06-19<br />
I've often had the problem of having to build a Linux kernel for a slow machine, while having a much faster machine available. In my case, the slow machine is an Asus EEE-PC running 32bit Fedora, and my faster machine is an Intel i7 running 64bit Fedora.<br />
<div>
<br /></div>
<div>
In order for the faster 64bit machine to build a kernel and install it on the slower 32bit one, you'll need to bear in mind three issues:</div>
<div>
<br /></div>
<div>
* How to build a 32bit kernel on a 64bit machine</div>
<div>
* How to give access to the slower machine for installing the kernel</div>
<div>
* How to actually install the kernel on the slower machine.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<span class="Apple-style-span"><b>How to build a 32bit kernel on a 64bit machine</b></span></div>
<div>
<br /></div>
<div>
Assuming you already have kernel source and a suitable .config available on the faster machine, simply run your usual kernel make, with the addition of ARCH=x86 and with menuconfig as a target. In my case, this would be:</div>
<div>
<br />
<span class="Apple-style-span" style="font-family: 'courier new';">fast$: make ARCH=x86 -j16 O=/home/mark/Source/build-linux-2.6-32bit/ menuconfig</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div>
<span class="Apple-style-span">Then deselect the top option "[ ] 64-bit kernel" before exiting and saving. Now you can build the kernel with:</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div style="font-family: courier new;">
<span class="Apple-style-span">fast$: make ARCH=x86 -j16 O=/home/mark/Source/build-linux-2.6-32bit/ </span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div>
<span class="Apple-style-span">Note that the '-j' option gives the number of simultaneous jobs make can run, commonly set to 2x the number of processors/cores in your system. I have 8 cores, so I use them all by specifying '-j16'. </span></div>
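Rather than hard-coding '-j16', the job count can be derived from the machine's core count with nproc - a quick sketch (the build path is just the example used above):

```shell
# Derive the make job count from the core count instead of hard-coding it
JOBS=$(( $(nproc) * 2 ))
echo "make ARCH=x86 -j${JOBS} O=/home/mark/Source/build-linux-2.6-32bit/"
```

This keeps the same invocation correct when you move between machines with different core counts.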
<div>
<span class="Apple-style-span"><br />
</span></div>
<div>
<span class="Apple-style-span">The O= specifies the build directory where the build objects are placed. The .config you wish to use should go in here. There are many reasons why it is a good idea to have separate build and source dirs. </span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div>
<span class="Apple-style-span"><span class="Apple-style-span"><b>How to give access to the slower machine for installing the kernel</b></span></span></div>
<div>
<span class="Apple-style-span"><span class="Apple-style-span"><b><br />
</b></span></span></div>
<div>
<span class="Apple-style-span">You could also use a Samba share for this, but I prefer using NFS. Note that using NFS in this way isn't the most secure method, but it works.</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div>
<span class="Apple-style-span">On the remote, slower, machine with NFS installed, export your root (/) and boot (/boot) directories by editing the /etc/exports file or using the system-config-nfs program. My /etc/exports file looks like this:</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div style="font-family: 'Courier New', Courier, monospace;">
<span class="Apple-style-span">/boot *(rw,sync,no_root_squash)</span></div>
<div style="font-family: 'Courier New', Courier, monospace;">
<span class="Apple-style-span">/ *(rw,sync,no_root_squash)</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div>
<span class="Apple-style-span">Then restart the NFS services nfs, nfslock and rpcbind (portmap on older systems):</span></div>
<div>
<span class="Apple-style-span"><br />
</span><br />
<div style="font-family: 'Courier New', Courier, monospace;">
<span class="Apple-style-span">slow$: sudo service rpcbind restart</span></div>
<div style="font-family: 'Courier New', Courier, monospace;">
<span class="Apple-style-span">slow$: sudo service nfslock restart</span></div>
</div>
<div style="font-family: 'Courier New', Courier, monospace;">
<span class="Apple-style-span">slow$: sudo service nfs restart</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div>
Now it should be possible to remotely mount these exports on the faster machine - replace &lt;remote-ip&gt; with the name or IP address of the remote PC:</div>
<div>
<br /></div>
<div style="font-family: 'Courier New', Courier, monospace;">
fast$: mkdir ~/remote-root</div>
<div style="font-family: 'Courier New', Courier, monospace;">
fast$: mkdir ~/remote-boot</div>
<div style="font-family: 'Courier New', Courier, monospace;">
fast$: sudo mount -t nfs &lt;remote-ip&gt;:/ ~/remote-root</div>
<div style="font-family: 'Courier New', Courier, monospace;">
fast$: sudo mount -t nfs &lt;remote-ip&gt;:/boot ~/remote-boot</div>
<div style="font-family: 'Courier New', Courier, monospace;">
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<span class="Apple-style-span"><b>Installing the kernel on the slower machine</b></span></div>
<div>
<span class="Apple-style-span"><b><br />
</b></span></div>
<div>
<span class="Apple-style-span">Almost there, we just need to install the kernel image on the slower machine and reboot to run it!</span></div>
<div>
<span class="Apple-style-span">For this to work seamlessly, a small patch is needed to the /sbin/installkernel script (on my Fedora machine, at least):</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div face="courier new" style="font-family: "Courier New",Courier,monospace;">
<span class="Apple-style-span"></span><br />
<div>
<span class="Apple-style-span">--- /sbin/installkernel</span></div>
<div>
<span class="Apple-style-span">+++ /sbin/installkernel</span></div>
<div>
<span class="Apple-style-span">@@ -32,7 +32,10 @@</span></div>
<div>
<span class="Apple-style-span">cfgLoader=1</span></div>
<div>
<span class="Apple-style-span">fi</span></div>
<div>
</div>
<div>
<span class="Apple-style-span">-LINK_PATH=/boot</span></div>
<div>
<span class="Apple-style-span">+if [ -z "$LINK_PATH" ]; then</span></div>
<div>
<span class="Apple-style-span">+ LINK_PATH=/boot</span></div>
<div>
<span class="Apple-style-span">+fi</span></div>
<div>
<span class="Apple-style-span">+</span></div>
<div>
<span class="Apple-style-span">RELATIVE_PATH=`echo "$INSTALL_PATH/" | sed "s|^$LINK_PATH/||"`</span></div>
<div>
<span class="Apple-style-span">KERNEL_VERSION=$1</span></div>
<div>
<span class="Apple-style-span">BOOTIMAGE=$2</span></div>
</div>
<div>
<span class="Apple-style-span"><br />
</span></div>
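The effect of the patch is simply to make the /boot default overridable from the environment: LINK_PATH falls back to /boot only when the caller hasn't set it. A minimal shell sketch of that logic (the variable values here are illustrative):

```shell
# Mirror of the patched installkernel logic: default LINK_PATH only if unset/empty
unset LINK_PATH
if [ -z "$LINK_PATH" ]; then
    LINK_PATH=/boot
fi
first="$LINK_PATH"              # the default /boot was applied

LINK_PATH=/home/mark/remote-boot
if [ -z "$LINK_PATH" ]; then
    LINK_PATH=/boot
fi
second="$LINK_PATH"             # the caller's value wins, untouched

echo "$first $second"           # prints: /boot /home/mark/remote-boot
```

This is why exporting LINK_PATH before running make install (as shown further down) now takes effect instead of being clobbered by the script.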
<div>
<span class="Apple-style-span">Now, while logged in as the <b>superuser</b>, we can export some environment variables to tell the kernel make install scripts where we would like our kernel and modules to be installed (note the difference between remote-<b>r</b>oot and remote-<b>b</b>oot!):</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div style="font-family: courier new;">
<span class="Apple-style-span">fast#: export INSTALL_PATH=/home/mark/remote-boot/</span></div>
<div style="font-family: courier new;">
<span class="Apple-style-span">fast#: export LINK_PATH=/home/mark/remote-boot/</span></div>
<div style="font-family: courier new;">
<span class="Apple-style-span">fast#: export INSTALL_MOD_PATH=/home/mark/remote-root/</span></div>
<div>
<span class="Apple-style-span"><br />
</span></div>
<div>
<span class="Apple-style-span" style="font-size: 100%;"><span class="Apple-style-span">For a Samba export, just replace the exported variables above with their equivalents (usually ~/.gvfs/root\ on\ remote/ or similar for Fedora). </span></span></div>
<div>
<span class="Apple-style-span" style="font-size: 100%;"><span class="Apple-style-span">Now we can simply run the install make targets on our fast machine, which installs to our slower, 32-bit target PC:</span></span></div>
<div>
<span class="Apple-style-span"><span class="Apple-style-span" style="font-size: 16px;"><span style="font-size: 100%;"> </span><br />
</span></span></div>
<div style="font-family: courier new;">
<span class="Apple-style-span">fast#: make ARCH=x86 O=/home/mark/Source/build-linux-2.6-32bit/ modules_install install</span></div>
<div>
<br />
Then edit your grub.conf (or equivalent, as I haven't yet found out why /sbin/new-kernel-pkg doesn't update this automagically on the remote machine).<br />
<br /></div>
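As a sketch, the stanza added to grub.conf on the slow machine might look like the following (GRUB legacy syntax, as used on Fedora of that era; the kernel version string, partition and root device are hypothetical placeholders):

```
title Custom kernel (cross-built)
        root (hd0,0)
        kernel /vmlinuz-2.6.35-custom ro root=/dev/sda2
        initrd /initramfs-2.6.35-custom.img
```

Note that when /boot is its own partition, GRUB legacy paths are relative to /boot, which is why no /boot prefix appears on the kernel and initrd lines.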
<div>
Reboot, select the new kernel from the GRUB menu (or equivalent), and happy hacking!</div>
<div>
<br /></div>Anonymoushttp://www.blogger.com/profile/16449496400214275214noreply@blogger.com0tag:blogger.com,1999:blog-8127306923314168105.post-45247344400043145292009-02-19T11:24:00.000+00:002012-05-31T09:33:59.536+01:00Removing trailing whitespace & tabs, at the end of lines in source code files.I always edit files in Vim with highlighting of tabs and spaces at the end of lines, in case I catch myself putting them in, by adding these lines to my ~/.vimrc :<br />
<br />
<span style="font-family: 'Courier New', Courier, monospace;"> let c_space_errors=1</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> highlight WhitespaceEOL ctermbg=red guibg=red</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> match WhitespaceEOL /\s\+$/</span><br />
<br />
<span style="font-family: inherit;">But occasionally, I encounter a file that is caked in red squares and lines at the end of most lines, and a neat way of removing these in one go is to run the following command on the file (entered in command mode, after the ':' prompt):</span><br />
<br />
<span style="font-family: 'Courier New', Courier, monospace;"> %s/\s\+$//</span><br />
<br />
Even more rarely, I get an entire directory of such files and need to blatt all the trailing whitespace. I either use this line directly from the command line (bash), or place it in a quick script to do the job:<br />
<br />
<span style="font-family: 'Courier New', Courier, monospace;">sed 's/[[:space:]]*$//' file1.c > file2.c</span>Anonymoushttp://www.blogger.com/profile/16449496400214275214noreply@blogger.com0tag:blogger.com,1999:blog-8127306923314168105.post-50853851310365229992009-01-05T11:43:00.002+00:002010-01-25T22:36:27.572+00:00Keeping it all in mindBack in the 50's, the psychologist George A. Miller wrote a paper entitled '<a href="http://psychclassics.yorku.ca/Miller/">The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information</a>'. While quite a dry and involved read, its findings (and other corroborating evidence) are of great significance to anyone creating a software design, architecture, process, checklist, API or code listing. In fact, anything where a large number of items or concepts can be grouped or categorised may benefit from its findings, namely that:<br /><br /><blockquote><span style="font-weight: bold; font-style: italic;">There is a limit to the number of items we can keep in short-term memory - 7 plus or minus 2.</span><br /></blockquote><br />While not applicable to basic lists - where the reader doesn't simultaneously need to be aware of every item in the list - it is a powerful <a href="http://en.wikipedia.org/wiki/Heuristics">heuristic</a> when applied to, for example, a software architecture layer. In this case a person trying to understand the architecture would benefit from awareness of all parts of the architecture at the level being studied, and in doing so finds the architecture more easily discoverable and transparent than if the layer were more finely partitioned.<br /><br />Another great example comes in the form of a process description or checklist, especially where safety is critical.
Here it is important not to miss any steps or checks, and the chance of doing so is reduced if the number of steps is kept to a manageable amount.<br /><br />It's difficult to list everything to which this heuristic can be applied - there are always opportunities available if you can spot them. With practice, it can become second nature - like many other design heuristics (the appendix of 'The Art of Systems Architecting' by Maier and Rechtin is a great source for such things).<br /><br />If you want some more examples, just take a look at some of the other posts in this blog... :o). Also, next time you're wading through a mass of code, design information or an architecture description and finding it difficult, bear in mind how it is organised: is the author trying to make you 'bite off more than you can chew'?<br /><br />This applies even more so if you are designing or developing. Your own software (especially open source) could benefit greatly if it's more easily discoverable, understandable and therefore maintainable by others who may not be so familiar with its workings.Anonymoushttp://www.blogger.com/profile/16449496400214275214noreply@blogger.com0tag:blogger.com,1999:blog-8127306923314168105.post-49064451975571284512008-12-17T11:29:00.005+00:002012-06-13T11:43:44.182+01:00Peer code reviewsPeer code reviews can have a devastating effect on software bugs, greatly reducing their number and scope. However, not all code reviews are the same and some are more effective than others. There is no one-size-fits-all guide to code reviewing, but there are ways of getting the right code review process for your situation.<br />
At the bottom end of the effectiveness scale is a peer looking at a screen resplendent with the review code. This is better than no review, but still has drawbacks:<br />
<ul>
<li>The review procedure will vary according to the reviewer and their state of mind, some reviewers will be more picky about certain issues or even miss things others would find easily.</li>
<li>Any problems found and commented on in the review may not be fixed - due to forgetfulness, other 'more important' pressing issues, or the author under review thinking they are trivial. In any case, the problems are probably not followed up afterwards to make sure they're resolved.</li>
<li>A lack of reviewing guidelines can lead to some comments being taken a bit too personally.</li>
<li>Most people when reviewing will only look through the code once from top to bottom, and pick out any issues of any type as they go along (syntax issues, loop counting and memory allocations for example). This is much less efficient than looking for one type of issue at a time, and scanning the code once for each type - if you don't believe me, try it!</li>
</ul>
To catch as many bugs as possible during a code review, the generally agreed best practice includes the following guidelines:<br />
<ol>
<li>Use a checklist, some examples can be found in the links below.</li>
<li>Review for one type of issue at a time.</li>
<li>Improve the checklist to look for bugs which slip through the reviews as time goes on.</li>
<li>Use code printouts on which you can scribble comments - I've not come across any software tool that is as effective as paper.</li>
<li>Log any issues not resolved during the review process, for example in the form of a bug database entry - even if it does seem trivial now, experience has taught me that it might be the most important thing in the world next month! It should also be possible to make the reviewer aware of any previous issues found with the code under review.</li>
<li>Have your process geared towards reviewing little and often.</li>
<li>Be balanced - make sure to note what the code does successfully, as well as its issues, and offer praise where it is due.</li>
</ol>
More thorough details and further reading can be found here:<br />
<ul>
<li><a href="http://my.safaribooksonline.com/9780768685855/ch14lev1sec2?portal=informit">http://my.safaribooksonline.com/9780768685855/ch14lev1sec2?portal=informit</a></li>
<li><a href="http://en.wikipedia.org/wiki/Personal_Software_Process">http://en.wikipedia.org/wiki/Personal_Software_Process</a></li>
<li><a href="http://www.macadamian.com/index.php?option=com_content&task=view&id=27&Itemid=31">http://www.macadamian.com/index.php?option=com_content&task=view&id=27&Itemid=31</a></li>
<li><a href="http://smartbear.com/docs/BestPracticesForPeerCodeReview.pdf">http://smartbear.com/docs/BestPracticesForPeerCodeReview.pdf</a></li>
<li><a href="http://www.developer.com/tech/article.php/3579756">http://www.developer.com/tech/article.php/3579756</a></li>
</ul>Anonymoushttp://www.blogger.com/profile/16449496400214275214noreply@blogger.com0tag:blogger.com,1999:blog-8127306923314168105.post-70710353973639339792008-11-20T16:50:00.009+00:002012-06-13T11:46:46.280+01:00Easy SCMSoftware Configuration Management (SCM) is an important part of software development - without robust SCM, no software development project can achieve a high quality result.<br />
To this end, there are a few simple characteristics that indicate good SCM. Personally, I haven't encountered a successful project whose SCM plan does not have these characteristics:<br />
<ol>
<li> Be a snail - leave a trail! Changes are never versioned without a documented reason.</li>
<li> Documented reasons are always either defect fixes, or software enhancements. Both defects and enhancements can be identified by an ID without the need to log detailed explanations for each change - all that is required is for the change to cross reference the defect or enhancement (from a bug database or project plan, for example).</li>
<li> Each change is made for a logical reason, and one reason only. Changes aren't combined or split over several different versions.</li>
<li> It is easy to 'do some archaeology' by quickly running older configurations. e.g. There exists a store of automated nightly builds with corresponding configuration IDs available.</li>
<li> The way of working is cultural, not 'enforced' - although it should be difficult to 'absent-mindedly' not follow the process. i.e. You can't make a change without a defect or enhancement ID to log against it.</li>
<li>The SCM system is transparent and visible, so others with an interest can view progress (although not necessarily take part in the activities - that can be damaging to progress!).</li>
<li> Each change represents no more than two man-weeks of work.</li>
</ol>
So, how does your project rate against these?Anonymoushttp://www.blogger.com/profile/16449496400214275214noreply@blogger.com0tag:blogger.com,1999:blog-8127306923314168105.post-24421987929709458792008-10-31T23:06:00.006+00:002008-12-29T18:17:06.796+00:00The IEEE software standardsThe <a href="http://standards.ieee.org/software/">IEEE software standards</a> is a very useful set of documents, even if you're not in an organisation that is into particularly formal software development methods. The standards do not necessarily have to be implemented to use the information they contain.<br />For example, they can be a good starting point if you're ever asked to plan or produce documents such as a software requirements specification (SRS), software configuration management plan (SCM plan), or a software architecture document (although for this I've found HP have available a much more usable document, "<a href="http://rise.uni.lu/tiki/se2c-bib_download.php?id=417">A Template for Documenting Software and Firmware Architectures</a>").<br /><br />The only downside is that they are not free, but many organizations have IEEE membership and have the standards available for employees.<br /><br />Here's the list of IEEE standards and guides I've found to be of use, in no particular order, vaguely grouped for readability:<br /><br /><br /><span style="font-weight: bold; font-style: italic;">Project Management</span><br /><ul><li><span style="font-weight: bold;">1490 - Adoption of PMI Standard - A Guide to the Project Management Body of Knowledge</span> (PMBOK) - A guide to project management knowledge and practices in general widespread use.</li><li><span style="font-weight: bold;">1045 - Standard for Software Productivity Metrics</span> - A bit dated, but the practices and concepts it gives for measuring productivity are well described.</li><li><span style="font-weight: bold;">1058 - Standard for Software Project Management Plans</span> - Worth a look if you're unsure as to what 
you'll need to put in a project management plan, but a little too specific to the IEEE way of doing things.</li><li><span style="font-weight: bold;">ISO/IEC 12207.0 - Software Life Cycle Processes</span> - Attempts to classify all the processes that contribute to software, and puts them into a framework.</li><li><span style="font-weight: bold;">ISO/IEC 12207.1 -</span> <span style="font-weight: bold;">Software Life Cycle Processes</span> <span style="font-weight: bold;">- Life Cycle Data </span>- Attempts to classify all the processes that contribute to software, and puts them into a framework.</li><li><span style="font-weight: bold;">982.1 - Standard Dictionary of Measures to Produce Reliable Software</span> - A bit ancient (1988), but tries to give a description of measures that can be made on software and software projects that indicate the quality of the software, with all the mathematical rigour that involves.</li><li><span style="font-weight: bold;">1220 - Standard for Application and Management of the Systems Engineering Process</span> - Similar to 1074, again use CMMI instead.</li></ul><br /><span style="font-style: italic; font-weight: bold;">Software Engineering</span><br /><ul><li><span style="font-weight: bold;">Guide to the Software Engineering body of Knowledge</span> (SWEBOK) - <a href="http://www.swebok.org/">Available here</a>. A guide to software engineering knowledge and practices in general widespread use.</li><li><span style="font-weight: bold;">610 - Glossary of Software Engineering Terminology</span> - A little dated, and leans towards IEEE understanding (as opposed to widespread understanding) of some terms, but can still be useful for reference, and is referenced by most other IEEE standards.</li><li><span style="font-weight: bold;">828 - Standard for Software Configuration Management Plans</span> - If you need to produce an SCM plan and have nowhere to start, this will show you the way. 
Also discusses useful activities involved in managing and adhering to an SCM plan.</li><li><span style="font-weight: bold;">1002 - Taxonomy for Software Engineering Standards</span> - Also quite ancient. A taxonomy is a method for classification, and this describes how a set of standards can be chosen to cover all necessary areas of software engineering.</li><li><span style="font-weight: bold;">1028 - Standard for Software Reviews</span> - Gives criteria and practices for reviewing software - be it for development, acquisition or operation.</li><li><span style="font-weight: bold;">1061 - Standard for a Software Quality Metrics Methodology</span> - Aimed at those measuring or assessing the quality of software, in a formal manner.</li><li><span style="font-weight: bold;">1074 - Standard for Developing Software Life Cycle Processes</span> - Attempts to define a way of creating a good software process. Not half as useful as <a href="http://www.sei.cmu.edu/cmmi/">CMMI</a>.</li><li><span style="font-weight: bold;">1471 - Recommended Practice for Architectural Description of Software-intensive Systems</span> - A version of Kruchten's 4+1 for software architecture.</li><li><span style="font-weight: bold;">1042 -</span> <span style="font-weight: bold;">Guide to Software Configuration Management</span> - Practices for performing SCM, and managing SC items within a project.</li></ul><br /><span style="font-weight: bold; font-style: italic;">Quality </span><br /><ul><li><span style="font-weight: bold;">730.1 - Guide for Software Quality Assurance Planning</span> - Great if you need to write and manage a Software Quality Assurance Plan, and have no idea of where to start - this lists and discusses the contents of such a document and good practices involved in managing it.</li><li><span style="font-weight: bold;">730 - Standard for Software Quality Assurance Plans </span>- A lot more detailed than 730.1, and gives the format and content requirements an SQA plan should meet to conform to
the IEEE standard.</li><li><span style="font-weight: bold;">830 - Recommended Practice for Software Requirements Specifications</span> - Describes what should be contained in a good (albeit formal) SRS, and gives several example outlines of SRS documents.</li><li><span style="font-weight: bold;">1062 - Recommended Practice for Software Acquisition</span> - Obtaining and using the right software, that's right for your needs is not an easy task. This gives some useful practices on performing this task.</li><li><span style="font-weight: bold;">1063 - Standard for Software User Documentation</span> - Good practices for putting the relevant information into your user documentation.</li><li><span style="font-weight: bold;">1219 - Standard for Software Maintenance</span> - This standard describes an iterative process for managing and executing software maintenance activities.</li><li><span style="font-weight: bold;">1228 - Standard for Software Safety Plans</span> - Establishes criteria for the content of a software safety plan.</li><li><span style="font-weight: bold;">1233 - Guide for Developing System Requirements Specifications</span> - A guide to obtaining and managing requirements in an SRS.</li></ul><br /><span style="font-weight: bold; font-style: italic;">Testing</span><br /><ul><li><span style="font-weight: bold;">829 - Standard for Software Test Documentation</span> - Gives a description of what should be in software test documentation (cases, logs, plans etc), and why. 
Gives the form and content of test documents, but does not say which documents are needed in particular situations.</li><li><span style="font-weight: bold;">1008 - Standard for Software Unit Testing</span> - A standard for planning, building and executing unit tests.</li><li><span style="font-weight: bold;">1012 - Standard for Software Verification and Validation Plans</span> - Gives a standard for V&V plans, describing what inputs, outputs and criteria are recommended for a project's V&V activities and should be recorded in a plan.</li><li><span style="font-weight: bold;">1044 - Guide to Classification of Software Anomalies</span> - How to write and manage bug reports. Very useful, as in my experience even some very experienced software engineers have trouble in taking the time to write useful bug reports.</li><li><span style="font-weight: bold;">1059 - Guide for Software Verification and Validation Plans</span> - Gives a process for using and managing V&V plans.</li></ul><br /><span style="font-weight: bold; font-style: italic;">Design</span><br /><ul><li><span style="font-weight: bold;">1016.1 - Guide to Software Design Descriptions</span> - Concentrates on documenting and using 'views' into a design, much like the <a href="http://www.ibm.com/developerworks/wireless/library/wi-arch11/">Kruchten 4+1</a> paper.</li><li><span style="font-weight: bold;">1016 - Recommended Practice for Software Design Descriptions</span> - How to go about writing an SDD, within the project life cycle.</li><li><span style="font-weight: bold;">1209 - Recommended Practice for the Evaluation and Selection of CASE Tools</span> - CASE tools are notoriously difficult to choose and use successfully (see an earlier <a href="http://var-blog-messages.blogspot.com/2008/06/can-you-sell-software-process.html">post</a>); this tries to guide you around this particular minefield.</li><li><span style="font-weight: bold;">1320.1 - Standard for Functional Modelling Language - syntax and semantics for
IDEF0</span> - A formal system/process modelling technique. Quite heavy, I've found <a href="http://syque.com/quality_tools/toolbook/Process/etvx_quality.htm">ETVX</a> to be much more usable.</li><li><span style="font-weight: bold;">1320.2 - Standard for Conceptual Modelling Language Syntax and Semantics for IDEF1X97 (IDEFobject)</span> - Again, a formal system/process modelling technique. Quite heavy, I've found <a href="http://syque.com/quality_tools/toolbook/Process/etvx_quality.htm">ETVX</a> to be much more usable.</li><li><span style="font-weight: bold;">1348 - Recommended Practice for the Adoption of CASE Tools</span> - Once you've got a CASE tool, the fun doesn't stop there. Carries on from 1209.</li></ul>Anonymoushttp://www.blogger.com/profile/16449496400214275214noreply@blogger.com0tag:blogger.com,1999:blog-8127306923314168105.post-80377386960641392752008-09-13T09:51:00.002+01:002008-12-13T10:49:17.157+00:00Can you sell a software process?Most software groups (and I'm sure that this doesn't apply to just software groups) with any sort of history have a record of trying out prescribed process improvement after process improvement, many of them ending in some sort of failure - generally meaning that the process didn't meet the expectation or goals it was introduced with.<br /><br />This can be especially true of process improvements that are built around an expensive software tool. I've had my fair share of colleagues lost to some sales type or evangelist touting the latest and greatest in software silver bullets. 
The use of the word 'evangelist' is telling, and conjures up images of an overzealous individual pushing unfounded ideas at you - it gives an indication of the mystic aura that a 'process guru' can give themselves, by which one can be blinded.<br /><br />CASE tools aren't that useful when they constrain you too much to one process, and some companies that push tools as well as processes rely on a self-created cult-like following whipped up from the guru of the day's scribblings. When these words of wisdom are unfounded, unproven and (even better) not understandable, they can be taken by savvy marketers to sell the tool.<br /><br />A prescribed process (from a heavy one to a very light or 'agile' one) may fit perfectly into the company, work environment, or culture it grew up in <span style="font-weight: bold;">but</span> moving it into another environment, more often than not, can transform it into a useless or even dangerous beast.<br /><br />In the translation, you can lose the meaning of important concepts, miss out on understanding vital assumptions and fall into all the other pitfalls associated with one person attempting to describe a complex system to another.
No matter how many reams of written documents or concise manifestos one writes, some things just get missed - even when the two communicator's work cultures are very similar.<br /><br />What would be much more useful and less prone to failure is an expert in processes to mentor a group over a period of time (even indefinitely), who can prescribe and tailor a process to meet their current needs, to put in place mechanisms to measure the success and progress of the group and ultimately advance the group's software capability - a process specialist or change agent.<br /><br />A software CASE tool is no substitute for an experienced mentor!Anonymoushttp://www.blogger.com/profile/16449496400214275214noreply@blogger.com0tag:blogger.com,1999:blog-8127306923314168105.post-7132720584856851092008-08-23T10:15:00.002+01:002008-12-04T11:26:36.499+00:00Ethical policies of professional IT bodiesI consider myself a professional software engineer. To be recognised as such in the wider community involves, amongst other things, subscribing to a professional ethical policy. Personally, I feel an ethical policy must involve not working on anything that will result in harm or even death to others (i.e. no 'defence' work). So which professional IT bodies would be suitable for me to join, with this in mind?<br /><br /><span style="font-weight: bold;">The BCS</span> - avoids the issue by not defining an ethical policy - only has a <a href="http://www.bcs.org/server.php?show=nav.6030">code of conduct</a> which<br /><br /><blockquote>'... governs your personal conduct as an individual member of the BCS and not the nature of business or ethics of the relevant authority'. 
</blockquote><br />This is a bit of a cop-out, and avoids the difficult questions altogether.<br /><br /><span style="font-weight: bold;">The IEEE</span> - does have a <a href="http://www.ieee.org/portal/pages/iportals/aboutus/ethics/code.html">code of ethics</a> - which does, on first glance seem to fit the bill:<br /><br /><span class="moduleText"><span style="font-family:Arial,Helvetica,sans-serif;"></span></span><blockquote><span class="moduleText"><span style="font-family:Arial,Helvetica,sans-serif;">1. to accept responsibility in making decisions consistent with the safety, health and welfare of the public, and to disclose promptly factors that might endanger the public or the environment;</span> </span><br />....<br />9. <span class="moduleText"><span style="font-family:Arial,Helvetica,sans-serif;">to avoid injuring others, their property, reputation, or employment by false or malicious action;</span></span></blockquote><span class="moduleText"><span style="font-family:Arial,Helvetica,sans-serif;"></span> <br />But I'm also aware that the IEEE has many high profile members who work in the defence industry and also produces specifications created and used in the defence industry.<br /><br /><span style="font-weight: bold;">The ACM </span>- <a href="http://www.acm.org/about/code-of-ethics">Much better, uses the term 'human rights'</a>, and takes the most care of these three bodies to make clear their ethical stance and to give direction:<br /><br /></span><p></p><blockquote>This principle concerning the quality of life of all people affirms an obligation to protect fundamental human rights and to respect the diversity of all cultures. An essential aim of computing professionals is to minimize negative consequences of computing systems, including threats to health and safety. 
When designing or implementing systems, computing professionals must attempt to ensure that the products of their efforts will be used in socially responsible ways, will meet social needs, and will avoid harmful effects to health and welfare.</blockquote><p></p> <blockquote>In addition to a safe social environment, human well-being includes a safe natural environment. Therefore, computing professionals who design and develop systems must be alert to, and make others aware of, any potential damage to the local or global environment.</blockquote>There's also an interesting discussion of ethics in this paper <a href="http://www.scs.org/ethics/scsEthicsCodeRationale.pdf">here</a>, and a very thorough discussion of ICT bodies and ethics, with recommendations<a href="http://exlibris.memphis.edu/ethics21/archives/05eei/papers/sandy.pdf"> here</a>.<br /><br />I'm still undecided - I believe I need to spend a lot more time researching this topic, as the standard policy for most IT organisations appear to allow members working in defence - even if it's only discernible by reading between the lines.Anonymoushttp://www.blogger.com/profile/16449496400214275214noreply@blogger.com0tag:blogger.com,1999:blog-8127306923314168105.post-71067181877691909022008-07-29T10:33:00.009+01:002009-05-17T22:16:12.770+01:00A Linux Literary TrilogyWhen I started delving into the world of Linux development, I was not only befuddled by the strange code layout and conventions, I also found the culture and ethos of Linux very confusing. There were three books that were invaluable in pulling my understanding out of this quagmire, which I'll mention briefly:<br /><br />The Linux Programmer's Toolbox by John Fusco. Allows your Linux usefulness to go from 0-60 in six seconds. Not totally exhaustive on all tools Linux, but it's brilliant for giving you an up to date map of the Linux development environment. 
Not only that, but it can give you a greater understanding of any development environment which uses make or GCC. I really can't recommend this book highly enough - it's so well written and laid out that I use it regularly as a reference manual. Not only does it cover many of the useful Linux tools (and shows you how to look for the rest), it covers how the kernel works, gnu make systems, debugging and has a nice comprehensive guide to using Vim and Emacs effectively (although, sadly, it doesn't say which is best - but I think you know the answer to that).<br /><br /><a href="http://www.faqs.org/docs/artu/">The Art of Unix Programming</a> by Eric S. Raymond (Link is to the full book text). Once you have all the tools, you'll want to know how to use them. Not only does this book give you the why behind the what of Linux - explaining the design and implementation mechanisms that have shaped it, it gives an excellent narrative on the history and context in which these mechanisms evolved from someone who was there at the time. Make sure you absorb his <a href="http://www.faqs.org/docs/artu/ch01s06.html">17 basics of the Unix philosophy</a>, don't trust yourself to touch a line of code until you do! 
I'll repeat them here, just in case you miss them:<br /><div class="orderedlist"><ol type="1"><li><p><a id="id2873540" class="indexterm">Rule of Modularity: Write simple parts connected by clean interfaces.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Clarity: Clarity is better than cleverness.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Composition: Design programs to be connected to other programs.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Separation: Separate policy from mechanism; separate interfaces from engines.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Simplicity: Design for simplicity; add complexity only where you must.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Transparency: Design for visibility to make inspection and debugging easier.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Robustness: Robustness is the child of transparency and simplicity.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Least Surprise: In interface design, always do the least surprising thing.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Silence: When a program has nothing surprising to say, it should say nothing.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Repair: When you must fail, fail noisily and as soon as possible.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Generation: Avoid hand-hacking; write programs to write programs when you 
can.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Optimization: Prototype before polishing. Get it working before you optimize it.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Diversity: Distrust all claims for “one true way”.</a></p></li><li><p><a id="id2873540" class="indexterm">Rule of Extensibility: Design for the future, because it will be here sooner than you think.</a></p></li></ol></div><br />The Art of Happiness: A Handbook for Living, by the Dalai Lama. Not a technology book, but it has to be said that some of the greatest challenges I've faced in my working environment have been not the code or the technology, but the people.<br />This book gives a good grounding in principles that foster a greater understanding of yourself and others, based around the idea that compassion for others is the main source of happiness.<br />Useful for those times when you need to take a deep breath and stand back....<br /><br />Using the correct case (2008-06-12)<br /><br />I think use cases are great, but unfortunately the term 'Use Case' has one meaning when used as a working tool and, more often than not, quite another when used as a buzzword in conversation.<br /><br />It's a term commonly used by developers, but personally, I often cringe when someone in a management or marketing role uses it to identify a product feature or even just a single scenario - a clear case of use case misuse!<br /><br />Use cases are a bit more technical than just an idea of how something may be used - the term was first coined by Ivar Jacobson in the 1980s, and has since been expanded on greatly by the likes of <a href="http://en.wikipedia.org/wiki/Alistair_Cockburn">Alistair Cockburn</a> and his peers. 
They are a brilliant tool for gathering requirements and consist of nothing more than <span style="font-weight: bold;">simple text descriptions</span>.<br /><br />However, even though a finished use case should be a shining beacon of simplicity and clarity, getting to that point can be devilishly complex and requires more than a little experience - the heuristic 'practice makes perfect' certainly holds true here. The trouble is, once you begin utilising use cases it quickly becomes apparent how useful they are in many more areas of software development than just requirements gathering.<br /><br />A short list of areas where use cases can add value:<br /><ul><li><span style="font-weight: bold;">Requirements analysis</span> - Is the use case describing what you want the system to do?</li><li><span style="font-weight: bold;">Requirements traceability</span> - Justifying the inclusion of a particular piece of design or code.</li><li><span style="font-weight: bold;">Software design </span> - Use cases can lead straight into the design phase, e.g. 
by using sequence diagrams for each important use case thread.</li><li><span style="font-weight: bold;">Planning and tracking</span> - A set of use cases breaks the software system down into more manageable chunks, against which work can be planned and progress measured.</li><li><span style="font-weight: bold;">Test design and writing</span> - A use case is also a ready-made test case.</li><li><span style="font-weight: bold;">Release management</span> - When deciding which features to include now and which to hold back, use cases provide a mechanism for linking features to user goals.</li><li><span style="font-weight: bold;">Change management</span> - Especially for iterative development, use cases can help by, for example, tying change requests to work estimates.</li></ul><br />I won't go into more detail here, as plenty of insightful information is available elsewhere on these topics (see <a href="http://en.wikipedia.org/wiki/Alistair_Cockburn">Alistair Cockburn's work</a>, for example). Suffice it to say that you can drive almost the whole software process using a use-case-centric approach - not that you should, of course, and the circumstances to which you are fitting a process must always be considered carefully.<br /><br />Once you realise the power of well-written use cases and understand the areas in which you will use them, it makes sense to spend more time considering their details and form - unfortunately there is no one-size-fits-all approach to this, as the structure of a use case is very dependent on the domain and environment in which it is being used.<br /><br />One example of this is use cases geared towards embedded systems, where I have found the assumption of everything happening in 'zero time' to be very useful - you end a use case scenario whenever you have to wait for something. 
On the other hand, this approach would be unwieldy when writing a set of high-level use cases for a user interface.<br /><br />One can spend many hours getting even a relatively small set of use cases into a usable form - this may appear a pedantic waste of time, but to an experienced use case author it is time well spent. Many lessons are learnt, and dead ends reached, when writing and using use cases 'in anger', and time spent early on absorbing those lessons proves very beneficial in the long run.
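<br /><br />To make the 'zero time' convention and the 'ready-made test case' idea above a little more concrete, here is a minimal sketch in Python. It is only an illustration, not from any real project - the CoffeeMachine system and all its method names are invented for the purpose. Each step of the main success scenario is paired with an assertion, and the scenario ends at the point where we would have to wait for the hardware:<br />

```python
# Hypothetical sketch: a use case scenario doubling as a test case.
# CoffeeMachine and its methods are invented names for illustration.

class CoffeeMachine:
    """Toy system-under-test for a 'Brew coffee' use case."""
    def __init__(self):
        self.state = "idle"
        self.cups = 0

    def insert_coin(self):
        self.state = "paid"

    def press_brew(self):
        # Guard: brewing only starts if the user has paid.
        if self.state == "paid":
            self.state = "brewing"

    def brewing_finished(self):
        # This event arrives later from the hardware, so under the
        # 'zero time' convention it begins a *new* scenario.
        if self.state == "brewing":
            self.cups += 1
            self.state = "idle"

# Main success scenario - one assertion per use case step.
machine = CoffeeMachine()
machine.insert_coin()             # 1. User pays
assert machine.state == "paid"
machine.press_brew()              # 2. User requests a brew
assert machine.state == "brewing"
# -- scenario boundary: everything above happened in 'zero time';
#    now we wait for the brew to complete --
machine.brewing_finished()        # 3. (new scenario) brew completes
assert machine.cups == 1 and machine.state == "idle"
print("main success scenario passed")
```

The point of the sketch is that the scenario steps and the test steps are the same artefact - writing the use case carefully gives you the skeleton of the test for free, and the 'zero time' boundary tells you exactly where one test scenario should end and the next begin.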