Thursday, 7 March 2013

A bit of Cardiff business advice - Data security and going mobile

I've just come back from a set of two really useful talks, the last of a series of 'Google Wednesdays' workshops aimed at helping businesses cope with the changing demands of the digital world - an opportunity kindly provided by Cardiff Start.

The first talk, 'Lock up your valuables' by Wayne Beynon of Capital Law, covered data protection and IP issues. Wayne started by pointing out that virtually every business stores personal information in some way - even if it's just the CVs of potential employees - and that the handling of this data comes under the auspices of the Data Protection Act; he went on to explain the obligations this places on a business. The majority of the points were reasonable and fairly obvious, but one I found had surprising implications concerned storing the data outside of the 'EEA territories' - it's not allowed, which means that if you're using cloud hosting of some type (e.g. Google Docs) for these data, then you could technically be in breach of the Act.

The next part of Wayne's talk covered handling commercially sensitive information and the ways it can be stolen - there's the more obvious external cyber-attack route, and also (probably more pertinent to Wayne's profession!) theft by employees and business partners. Here he recommended various security measures, from restricting access to physical hard copies and information systems, and logging accesses, to the more draconian use of computer forensics experts when things really go awry. He also pressed home the benefits of keeping your employees trained on your data security policies. Finally, he finished off with a quick tour of trademarks and copyright.

Up next was Liam Giles of Spindogs, with a great introduction to the considerations when choosing between a mobile app and a mobile-friendly website. The gist was that to justify an app, you need a good reason to take on the extra effort of supporting several different platforms (iOS/Android/etc), going through app-store approvals and living with a slower update cycle. Such reasons could be that you need functionality of the phone itself, such as the GPS, accelerometer or local storage, or that the app paradigm fits better with your chosen marketing path. He then showed how the Spindogs webpage responds to different form factors, handling the transformation from big to small screens gracefully as you resize the browser window displaying it (go on, have a go yourself!).

To conclude, Liam presented a plethora of slides provided by Google showing how mobile use is growing for many reasons, and why everyone should pay more attention to their mobile visitors - with 20-25% of users now accessing sites via mobile devices, it's quite a convincing argument.

And, to show that I was paying attention:
© Mark Einon 2013.

Wednesday, 30 May 2012

Set up your Raspberry Pi like a pro

In my day job, I've developed on several Linux-based embedded platforms and, without exception, they have all been set up for general development along the same lines - a physical serial connection/terminal, and a root filesystem accessed via an exported NFS directory. The Raspberry Pi, unlike most embedded systems I have used, has the added bonus of an X server - so you can also export the desktop via VNC. Following these instructions, I've managed to get the physical connections on the Pi board down to just a USB power lead (connected to a PC laptop) and a wifi dongle or ethernet cable, making it far more portable - play with a RasPi on the train!

I'm using Fedora Linux (16/17) as a host, and I think you should too :) However, the instructions are almost identical for other flavours of Linux (Ubuntu, Debian, Mint etc), and should be nearly the same on a Mac. If you're using any Microsoft-based OS, then you'll have to dig a bit deeper elsewhere - or preferably install Linux (hey, it's free!), maybe even just use a 'live' Linux distro.

Here's how to get all this running on your Raspberry Pi. Note that all commands are run on the host PC, not the RasPi, unless stated otherwise - and you can do each section on its own, as there are no dependencies between them - if, for example, you only want an NFS root filesystem.

Debian Squeeze

I've chosen to get this set up initially with the reference Debian Squeeze distro (currently debian6-19-04-2012.zip), available direct from the Raspberry Pi website - RaspberryPi.org/downloads. That way there should be plenty of help available in case trouble arises.
Follow the notes given by eLinux.org to get the Debian image onto an SD memory card, connect up the power/ethernet or wifi dongle/HDMI display and we'll go from there...

Serial Output & Terminal

A serial (aka 'UART') connection isn't strictly necessary to get the board running or to play with a running system, but it's essential if you're going to try home-built kernels or anything else low-level that could break, or stop from running altogether, the display output to more complicated devices such as an HDMI TV. For just a few pounds from eBay (search for "USB TTL UART" and make sure you get one with a lead that has individual pin connectors, as shown in the pic below), you don't even need to think about getting a soldering iron warm. The Debian distro we're using supports a serial console without changing any code, so all you need to do is:

* Connect the serial dongle between your host and RasPi: Three pins need to be connected - GND (ground), TXD (transmit) and RXD (receive). They should be marked on the USB dongle, and looking at the GPIO pinout diagram here, GND connects to P1-06, TXD to P1-08 and RXD to P1-10 - as in the picture below. It might be worth swapping TXD and RXD if you initially get no output. The small red USB dongle pictured obviously slots into a USB port on your host PC.


* Start a terminal program on your host: On Fedora, I use minicom (install from the command line (CLI) using 'sudo yum install minicom'). If running for the first time, run it with 'sudo minicom -s', select 'Serial port setup' from the menu, then press 'A' and change the device to /dev/ttyUSB0, press 'F' to change 'Hardware flow control' to 'No', and press Enter to return to the menu (don't change anything else). Back at the menu, select 'Save setup as dfl' and then 'Exit'. From now on you can start minicom from the CLI with just 'sudo minicom'. Make sure you have the USB dongle plugged in whenever you run minicom, otherwise it will complain about not finding ttyUSB0. (The host-side commands are gathered in the quick recap after this list.)

* Power on the Pi as normal: and you should see almost the same text scrolling down the terminal screen as appears on the TV connected via HDMI. Result! - Now give the TV back to the family...
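As a quick recap, the host-side serial terminal setup boils down to this (a sketch assuming Fedora, and that the dongle shows up as /dev/ttyUSB0 - check 'dmesg' if yours appears elsewhere):

sudo yum install minicom    # install the terminal program
sudo minicom -s             # one-off setup: Serial port setup -> 'A' (device /dev/ttyUSB0),
                            #   'F' (hardware flow control off), then 'Save setup as dfl' and Exit
sudo minicom                # everyday use - plug in the dongle first, then power on the Pi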


N.B. TODO - Currently on Debian Squeeze, the syslog trace from the init.d scripts isn't shown over UART. This is something that would be good to fix, as we ideally want as much verbose information on UART as possible if something goes wrong.

NFS root filesystem

Any Linux distribution needs a root filesystem to work, and Debian is no exception. At present this filesystem sits on a partition of the SD card inserted into the Raspberry Pi, and we want to copy it to the host PC. Why? It's much faster and easier to transfer, edit and view files on the PC than on the Pi (also saving precious Pi processing power), you can then back them up (perhaps in a versioning system such as git), and you get access to a much larger filesystem than will fit on a meagre SD card, amongst other reasons. Note that an SD card still needs to be inserted into the Pi, containing the bootloader and Linux kernel (getting the kernel itself to boot over the network requires a more advanced bootloader such as U-Boot - a FreeBSD developer has some notes here). To get this running, I had to do the following:

* Copy the Debian Pi root filesystem to the host PC: Make a new directory on the host PC, at the root of the filesystem, e.g. 'sudo mkdir /nfsexports'. Take your SD card containing the RasPi Debian image and insert it into the host PC. Two new media devices should be mounted automagically; make a note of where by checking the tail end of the 'dmesg' CLI output (mine reports /dev/sdb1 -> /dev/sdb3 as recently found), then run 'df -h':

[mark@marke ~]$ df -h
Filesystem                    Size  Used Avail Use% Mounted on
rootfs                         50G   16G   35G  31% /
devtmpfs                      3.9G     0  3.9G   0% /dev
tmpfs                         3.9G  860K  3.9G   1% /dev/shm
/dev/mapper/vg_marke-lv_root   50G   16G   35G  31% /
tmpfs                         3.9G   49M  3.9G   2% /run
tmpfs                         3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                         3.9G     0  3.9G   0% /media
/dev/sda1                     497M  184M  289M  39% /boot
/dev/mapper/vg_marke-lv_home  406G  141G  244G  37% /home
/dev/sdb1                      75M   28M   47M  37% /media/95F5-0D7A
/dev/sdb2                     1.6G  1.2G  298M  81% /media/18c27e44-ad29-4264-9506-c93bb7083f47

 Mine shows the second of the three devices (/dev/sdb2) mounted on /media/18c27e44-ad29-4264-9506-c93bb7083f47/. This long directory is the one whose contents you need to copy into the nfsexports directory created earlier, with the CLI command 'sudo cp -ap /media/18c27e44-ad29-4264-9506-c93bb7083f47/* /nfsexports/' - obviously replace the directory names with your own. Make sure you use this CLI version and not the GUI - the 'sudo' and the '-p' option passed to cp are important, as they preserve file ownership and permissions.
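Putting that step together (a sketch - substitute your own mount point for the long /media/... directory):

sudo mkdir /nfsexports
sudo cp -ap /media/18c27e44-ad29-4264-9506-c93bb7083f47/* /nfsexports/
ls /nfsexports    # should now list the usual bin, boot, etc, home, ... directories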

* Install an NFS server on the host PC: There are lots of guides on the web for installing and setting up an NFS server, so if you get into trouble, have a google. As a quick guide: on Fedora, run the CLI command 'sudo yum install nfs-utils libnfsidmap'. Once installed, edit the file /etc/exports to export the Pi root directory; for me, /etc/exports looks like this:

/nfsexports  192.168.2.223(rw,sync,no_root_squash)

Where 192.168.2.223 is the IP address of my RasPi. There can be wildcards in this address, so you could export it as 192.168.2.* for everything on your local subnet, or even just * for access to everyone (although this is not recommended). Finally, to export the directory, start the NFS server with 'sudo systemctl start nfs-server.service'. To test, I'd make sure that your host PC firewall and SELinux are disabled, then run 'sudo mount -t nfs localhost:/nfsexports localdir', where localdir is a local host PC directory. If successful, you should see the entire root filesystem present in localdir.
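In full, the host-side NFS setup looks something like this (a sketch assuming Fedora with systemd, the /nfsexports directory from above, a test mount point of /mnt/pitest and my Pi's IP address - adjust to suit, and skip the tee line if you've already edited /etc/exports by hand):

sudo yum install nfs-utils libnfsidmap
echo '/nfsexports  192.168.2.223(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo systemctl start nfs-server.service
sudo mkdir -p /mnt/pitest
sudo mount -t nfs localhost:/nfsexports /mnt/pitest    # sanity check - the Pi's rootfs should appear here
sudo umount /mnt/pitest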

* Point the RasPi towards the NFS server: Now we need to tell the RasPi to use our exported rootfs instead of the one on the SD card. We do this by modifying a file on the SD card. If you still have the SD card inserted in your host PC, run 'df -h' again and take a look at the other mounted partition - /dev/sdb1 in my case, which is mounted on /media/95F5-0D7A. So I'm going to edit the file /media/95F5-0D7A/cmdline.txt, which initially looks like:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext3 rootwait

And change it to your equivalent of (with no return characters, all one line):

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 ip=192.168.2.223:192.168.2.225:192.168.2.1:255.255.255.0:rpi:eth0:off root=/dev/nfs nfsroot=192.168.2.225:/nfsexports,vers=3 rw rootwait

Here you'll need to change the two instances of '192.168.2.225' to the IP address of your host PC, and '192.168.2.223' to the IP address you want to give your RasPi - make sure that the first three parts of the address (192.168.2) are the same for both, and that '192.168.2.1' matches your router's IP.
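For reference, the fields of the ip= parameter (as described in the kernel's nfsroot documentation) are:

ip=<client-ip>:<server-ip>:<gateway-ip>:<netmask>:<hostname>:<device>:<autoconf>

so in the line above, 192.168.2.223 is the Pi itself, 192.168.2.225 is the host PC serving NFS, 192.168.2.1 is the router, 255.255.255.0 is the netmask, 'rpi' is the hostname, eth0 is the interface and 'off' turns autoconfiguration off.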

* Fix some annoying issues with Debian: (TODO - fix this properly!) Further to the above, I had to work around some issues with the Debian init.d scripts that froze startup from NFS (before reaching the 'login:' prompt). I had to rename the following files in the exported NFS root, in the directory /nfsexports/etc/rc2.d/: S14portmap, S15nfs-common, S17ifplugd, S17rsyslog, S17sudo, S18cron and S18ntp. If you find this is an issue for you too, change the 'S' character at the front of these files to 'K', e.g. S14portmap becomes K14portmap - a one-liner for this is sketched below. I'd be interested in any feedback anyone has on this issue...
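If you'd rather do the renaming in one go, something along these lines works from the host (a sketch - adjust the list to whichever scripts actually cause the hang for you):

cd /nfsexports/etc/rc2.d/
for f in S14portmap S15nfs-common S17ifplugd S17rsyslog S17sudo S18cron S18ntp; do
    sudo mv "$f" "K${f#S}"    # turn each Start link into a Kill link
done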

* Boot the RasPi with an NFS root: Now just unmount the SD card and reinsert it into the RasPi before powering on again. Watch as the RasPi boots to a prompt from NFS (possibly...!).

Added bonus - export a desktop via VNC

The final step to totally ditching the TV in favour of your PC/laptop screen is to export the RasPi desktop via VNC, to view and control it on the host PC:

* Install a vncserver on the RasPi: At the fresh RasPi prompt before you (not the host!), log in (username pi, password raspberry) and install tightvncserver with 'sudo apt-get install tightvncserver'. Enter a VNC password when prompted, but don't bother with a view-only password. To run the server so that you can log into it remotely, type 'vncserver :1 -name RasPi -depth 16 -geometry 1680x1050'. Feel free to play with the geometry or pixel depth, but bear in mind bigger numbers may mean slower performance.
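On the Pi, that whole step is just the following (a sketch - pick your own geometry, and note that 'vncserver -kill :1' stops the server again if you want to restart it with different settings):

sudo apt-get install tightvncserver
vncserver :1 -name RasPi -depth 16 -geometry 1680x1050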

* Look at the RasPi desktop on your host PC: Fedora should have a ready-installed remote desktop viewer, imaginatively named 'Remote Desktop Viewer'. Find it in your applications list and start it. Select the menu item Remote->Connect, change the protocol to VNC, and add your RasPi IP address and X server ID in the Host: box (for me, this was 192.168.2.223:1):


...then click 'Connect' at the bottom of the dialog box. You should then be prompted for the VNC password you chose on the RasPi earlier. Enter it, and bask in the glory of the Raspberry Pi desktop:


* Happy hacking!



Monday, 30 April 2012

Calm that clicking laptop hard drive in Fedora

Every time I install a Fedora distro on a laptop, I have to stop the hard drive clicking every few seconds by fixing the over-aggressive Advanced Power Management (APM) setting.

To provide a permanent fix, the setting needs to be applied every time the machine boots. Here's how to fix it once and forget about it:
  • Edit a new file /etc/rc.d/init.d/hdparm, with the contents (where /dev/sda is the noisy HDD of your choice)
#!/bin/bash
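# 254 = the highest APM level short of 255 (which disables APM entirely) - stops the constant head parking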
/sbin/hdparm -B 254 /dev/sda
  • Make sure this file has the correct permissions to be run:
$ sudo chmod 755 /etc/rc.d/init.d/hdparm
  • Now create a link for the rc level for the hdparm script to run at startup:
$ cd /etc/rc.d/rc3.d/
$ sudo ln -s ../init.d/hdparm S80hdparm

Now on reboot, the hard drive should be much quieter, won't wear out as quickly and will let you get on with your work... but it will eat a little more power than before. You can change the 'hdparm -B xxx' number to something a little lower if power use is an issue. To check that the change has taken effect, run:

$ sudo hdparm -B /dev/sda
/dev/sda:
 APM_level = 254

Tuesday, 5 July 2011

Blanket bomb a C source file with function entry/exit trace

'Blanket bomb' a file with trace for function entry / exit, from vim:

:%s/^{/{\r\ \ \ \ printf(">>>%s:%d\\n", __FUNCTION__,__LINE__);\r/
:%s/\(\ *\)\(return.*\n\)/\1printf("<<<%s:%d\\n", __FUNCTION__,__LINE__);\r\1\2/

or just using GNU sed from the command line to edit the file in place (note the '-i' - redirecting sed's output straight back onto the same file would truncate it before it's read):

sed -i 's/^{/{\n    printf(">>>%s:%d\\n", __FUNCTION__,__LINE__);\n/' Filename.c
sed -i 's/^\( *\)\(return.*\)$/\1printf("<<<%s:%d\\n", __FUNCTION__,__LINE__);\n\1\2/' Filename.c
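To see what this actually does, here's a minimal sketch run against a throwaway file (demo.c is just an illustration - and remember that the instrumented file will need stdio.h to compile):

cat > demo.c <<'EOF'
int add(int a, int b)
{
    return a + b;
}
EOF
sed -i 's/^{/{\n    printf(">>>%s:%d\\n", __FUNCTION__,__LINE__);\n/' demo.c
sed -i 's/^\( *\)\(return.*\)$/\1printf("<<<%s:%d\\n", __FUNCTION__,__LINE__);\n\1\2/' demo.c
cat demo.c    # add() now announces itself on entry and just before its return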

Sunday, 19 June 2011

How to use a 64bit machine to build & install a 32bit Linux kernel for another PC

I've often had the problem of having to build a Linux kernel for a slow machine, while having a much faster machine available. In my case, the slow machine is an Asus EEE-PC running 32bit Fedora, and my faster machine is an Intel i7 running 64bit Fedora.

In order for the faster 64bit machine to build a kernel and install it on the slower 32bit, you'll need to bear in mind three issues:

* How to build a 32bit kernel on a 64bit machine
* How to give access to the slower machine for installing the kernel
* How to actually install the kernel on the slower machine.


How to build a 32bit kernel on a 64bit machine

Assuming you already have kernel source and a suitable .config available on the faster machine, simply run your usual kernel make, with the addition of ARCH=x86 and with menuconfig as a target. In my case, this would be:

fast$: make ARCH=x86 -j16 O=/home/mark/Source/build-linux-2.6-32bit/ menuconfig

Then deselect the top option "[ ] 64-bit kernel" before exiting and saving. Now you can build the kernel with:

fast$: make ARCH=x86 -j16 O=/home/mark/Source/build-linux-2.6-32bit/

Note that the '-j' option gives the number of simultaneous jobs make can run; a common rule of thumb is 2x the number of processors/cores in your system. I have 8 cores, so I use them all by specifying '-j16'.

The O= option specifies the build directory where the build objects are placed - the .config you wish to use should go in here. There are many reasons why it's a good idea to have separate build and source directories, not least that a single clean source tree can then serve several different build configurations.
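If you'd rather not hard-code the job count, you can derive it from the machine at hand (a sketch assuming GNU coreutils' nproc is available, which it is on any recent Fedora):

fast$: make ARCH=x86 -j$((2 * $(nproc))) O=/home/mark/Source/build-linux-2.6-32bit/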


How to give access to the slower machine for installing the kernel

You could use a Samba share for this instead, but I prefer NFS. Note that using NFS in this way isn't the most secure method, but it works.

On the remote, slower, machine with NFS installed, export your root (/) and boot (/boot) directories by editing the /etc/exports file or using the system-config-nfs program. My /etc/exports file looks like this:

/boot *(rw,sync,no_root_squash)
/     *(rw,sync,no_root_squash)

Then restart the NFS services: nfs, nfslock and rpcbind (portmap on older systems):


slow$: sudo service rpcbind restart
slow$: sudo service nfslock restart
slow$: sudo service nfs restart

Now it should be possible to mount these exports remotely on the faster machine - replace <remote-ip> with the name or IP address of the remote PC:

fast$: mkdir ~/remote-root
fast$: mkdir ~/remote-boot
fast$: sudo mount -t nfs <remote-ip>:/ ~/remote-root
fast$: sudo mount -t nfs <remote-ip>:/boot ~/remote-boot


Installing the kernel on the slower machine

Almost there, we just need to install the kernel image on the slower machine and reboot to run it!
For this to work seamlessly, a small patch to the /sbin/installkernel script is needed (on my Fedora machine, at least):


--- /sbin/installkernel
+++ /sbin/installkernel
@@ -32,7 +32,10 @@
 cfgLoader=1
 fi
 
-LINK_PATH=/boot
+if [ -z "$LINK_PATH" ]; then
+ LINK_PATH=/boot
+fi
+
 RELATIVE_PATH=`echo "$INSTALL_PATH/" | sed "s|^$LINK_PATH/||"`
 KERNEL_VERSION=$1
 BOOTIMAGE=$2
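Assuming you've saved the hunk above as installkernel.diff, applying it is just the following (or simply make the three-line change by hand if the patch doesn't apply cleanly):

fast$: sudo patch /sbin/installkernel < installkernel.diff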

Now we can export some environment variables to tell the kernel make install scripts where we would like our kernel and modules to be installed (Note the difference between remote-root and remote-boot!), while logged in as the superuser:

fast#: export INSTALL_PATH=/home/mark/remote-boot/
fast#: export LINK_PATH=/home/mark/remote-boot/
fast#: export INSTALL_MOD_PATH=/home/mark/remote-root/

For a Samba export, just replace the exported variables above with their equivalents (usually ~/.gvfs/root\ on\ remote/ or similar for Fedora).
Now we can simply run the install make targets on our fast machine, which installs to our slower, 32bit target PC:

fast#: make ARCH=x86 O=/home/mark/Source/build-linux-2.6-32bit/ modules_install install

Then edit your grub.conf (or equivalent) by hand, as I haven't yet found out why /sbin/new-kernel-pkg doesn't update this automagically on the remote machine.
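For reference, a GRUB legacy grub.conf entry for the new kernel looks something like the stanza below - the title, <version> string and root device are placeholders, so match them to the files the install step actually put into ~/remote-boot and to the slow machine's real root partition:

title Custom 32bit kernel (<version>)
        root (hd0,0)
        kernel /vmlinuz-<version> ro root=<your-root-partition>
        initrd /initramfs-<version>.img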

Reboot, select the new kernel from the GRUB menu (or equivalent), and happy hacking!

Thursday, 19 February 2009

Removing trailing whitespace & tabs, at the end of lines in source code files.

I always edit files in Vim with highlighting of tabs and spaces at the end of lines, so that I catch myself putting them in, by adding these lines to my ~/.vimrc:

  let c_space_errors=1
  highlight WhitespaceEOL ctermbg=red guibg=red
  match WhitespaceEOL /\s\+$/

But occasionally I encounter a file that is caked in red squares and bars at the end of most lines, and a neat way of removing these in one go is to run the following command on the file (entered in command mode, after the ':' prompt):

  %s/\s\+$//

Even more rarely, I get an entire directory of such files and need to blatt all the trailing whitespace. I either use this line directly from the command line (bash), or place it in a quick script to do the job:

sed 's/[ \t]*$//' file1.c > file2.c
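To blatt every C file under a directory in place, something like this does the job (a sketch using GNU sed's -i option - worth committing to version control first, just in case):

find . -name '*.[ch]' -exec sed -i 's/[[:space:]]*$//' {} +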

Monday, 5 January 2009

Keeping it all in mind

Back in the 1950s, the psychologist George A. Miller wrote a paper entitled 'The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information'. While quite a dry and involved read, its findings (and other corroborating evidence) are of great significance to anyone creating a software design, architecture, process, checklist, API or code listing. In fact, anything where a large number of items or concepts can be grouped or categorised may benefit from its central finding, namely that:

There is a limit to the number of items we can keep in short-term memory - seven, plus or minus two.

While not applicable to basic lists - where the reader doesn't need to be aware of every item in the list simultaneously - it is a powerful heuristic when applied to, for example, a software architecture layer. In this case a person trying to understand the architecture would benefit from being aware of all parts of the architecture at the level being studied, and in doing so would find the architecture more easily discoverable and transparent than if the layer were more finely partitioned.

Another great example comes in the form of a process description or checklist, especially where safety is critical. Here it is important not to miss any steps or checks, and the chance of doing so is reduced if the number of steps is kept to a manageable amount.

It's difficult to list everything to which this heuristic can be applied - there are always opportunities available if you can spot them. With practice, it can become second nature - like many other design heuristics (The appendix of 'The Art of Systems Architecting' by Maier and Rechtin is a great source for such things).

If you want some more examples, just take a look at some of the other posts in this blog... :o) But also, next time you're wading through a mass of code, design information or an architecture description and finding it difficult, bear in mind how it is organised - is the author trying to make you 'bite off more than you can chew'?

This applies even more so if you are designing or developing. Your own software (especially open source) could benefit greatly if it's more easily discoverable, understandable and therefore maintainable by others who may not be so familiar with its workings.