This does not make the LTSP client mobile and it still requires a wired network connection to work. The image could, however, be modified so that a wired connection is no longer required.
GRUB and the image can be installed e.g. with a live CD, a live USB stick, or even by first PXE booting the device and then using the LTSP client itself to do the installation.
The testing here was done with an Ubuntu 13.10 live CD, and the internal hard drive was completely formatted using a traditional DOS partition table. If you are using Ubuntu to do this, Ubuntu 12.10 or newer is required, as the GRUB version in Ubuntu 12.04 is too old to load the kernel and initramfs image from loopback mounts.
First we need to create a partition that holds the image. It can be either a normal partition or an LVM partition. You can use e.g. fdisk or cfdisk to create the partitions.
A single partition is enough, but if you also want to enable local swap, it is a good idea to create a second swap partition.
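For example, with fdisk (assuming the disk is /dev/sda; adjust the device and sizes to your hardware):

$ sudo fdisk /dev/sda
   o      # create a new empty DOS partition table
   n      # new primary partition 1 for the image (most of the disk)
   n      # new primary partition 2 for swap (optional)
   t      # change partition 2's type to 82 (Linux swap)
   w      # write the partition table and exit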
After creating the partition you need to format it and mount it somewhere (here /mnt/disk):
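For example, an ext4 filesystem on the first partition (again assuming /dev/sda1):

$ sudo mkfs.ext4 /dev/sda1
$ sudo mkdir -p /mnt/disk
$ sudo mount /dev/sda1 /mnt/disk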
To get GRUB installed on the disk, we need to call the grub-install command, which creates a boot directory under the specified root directory path:
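Something along these lines (assuming /dev/sda again):

$ sudo grub-install --root-directory=/mnt/disk /dev/sda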
There should now be files under /mnt/disk/boot/.
Now you need to somehow copy the i386.img file from the LTSP server to /mnt/disk/.
Next we need a custom grub.cfg file that tells GRUB how to load the kernel and initrd.img from inside the LTSP image. Below is a basic example that does that. Replace (hd0,1) with the partition you are using. The syntax is specified here: http://www.gnu.org/software/grub/manual/grub.html#Device-syntax
/mnt/disk/boot/grub/grub.cfg
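A minimal sketch (the menuentry structure and the loopback trick are the essential parts; the kernel arguments are placeholders - use whatever your LTSP initramfs expects for locating and mounting the image):

set default=0
set timeout=5

menuentry "LTSP" {
    insmod part_msdos
    insmod ext2
    insmod loopback
    # make the squashfs image on the first partition available as (loop)
    loopback loop (hd0,1)/i386.img
    # load the kernel and initramfs from inside the image
    linux (loop)/boot/vmlinuz ro root=/dev/sda1
    initrd (loop)/boot/initrd.img
}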
That’s it. Unmount /mnt/disk and reboot from the local hard drive. If you want to update the image, you can copy a new i386.img to the hard drive. No other changes are needed.
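For example, with the image partition mounted at /mnt/disk again:

$ sudo mount /dev/sda1 /mnt/disk
$ sudo cp i386.img /mnt/disk/
$ sudo umount /mnt/disk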
We are now managing IT systems for some 170 schools in dozens of cities around Finland. The schools have close to 8000 computers in total. The number of laptops is still relatively small, but approaching 1000. All of this is now managed with Puavo.
Puavo has been built over the years to serve our use case - lots of nearly identical centrally managed systems that still need separate user directories and storage areas. Every city has its own user directory and device inventory, and every school has local teacher admins that do some of the tasks using Puavo. We do not run separate instances of everything for every city or school; all the management tools are hosted in one place. Only the functions that need to be within the school LAN run there.
There’s more about Puavo’s organisation model in an earlier post.
Puavo started as a user management and simple inventory tool to help us serve teachers and pupils better. The first versions could only handle netbooting LTSP thin and fat clients, and a lot of management was done with other tools. Now Puavo is a set of tools that both store data and provide APIs that can be used by tools that integrate with the desktop.
Our current model is to use a single read-only desktop image file that can be used on all different devices - netbooting thin and fat clients, terminal servers and laptops. The image supports a huge variety of hardware, thanks to Ubuntu and Linux in general. The image is always stored as a single file and is not unpacked on laptops either. Because the image is read-only, users cannot install anything on the devices themselves. Adding new applications requires more testing, but this testing then benefits all of our users at once. Home directories are available normally for users to store their data. Web based services are also taking away the need for locally installed applications.
Puavo itself does not mandate image based installations and can also be used to manage normal desktop installations.
We have been running our systems using LTSP thin and fat clients for quite a few years. When we started almost 10 years ago, it was all thin clients. At first we were running only really old hardware - nowadays I wonder how those machines with 64MB of RAM could actually be used as thin clients. There was no fat client support in LTSP, so we deployed some locally installed Ubuntu workstations on the side. We really didn’t have tools to manage anything at large scale. Some LDAP directories were installed locally and lts.conf files were modified when needed. This was enough to handle a few hundred thin clients and a few dozen workstations.
After some years we started deploying fat clients that replaced the locally installed workstations. The number of fat clients grew slowly, and we built our first management tools to manage everything. Around this time we also started experimenting with laptop management tools. LTSP makes everything super simple, as there’s just one image and server to update, but with laptops that was not possible. We learnt about Puppet, which automates a lot of administration tasks, and wrote our desktop rules with it. LTSP servers, the fat client image and the laptops all used the same desktop configuration. Puppet handled package and configuration updates on servers and laptops.
Puppet worked quite well, but over time the configurations changed quite a bit: as new packages were installed, some old ones were removed. As we kept installing new laptops and servers in small batches, there were minor differences between old and new installations. We had all kinds of hardware - super fast new laptops, old laptops, new underpowered netbooks. On some of the machines running Puppet was way too heavy, and users often shut down the machines while updates were running. This led to occasional boot failures as kernel packages were corrupted. Supporting these was not really our dream come true. Sometimes the laptops were not used for a month, and when they were all started at once, Puppet would crash the school’s wireless network downloading the updates.
Many wireless networks in schools are built to minimize the number of access points. When the access point is behind two walls and 25 laptops are used at once, nothing works. We thought hard about the problem and came up with the idea of turning the teacher workstations into access points. As every classroom already has a workstation for the teacher and there’s a wired connection to it, why not just share the connection? So we installed wlan dongles in the workstations and installed access point software on them. The wireless traffic is tunneled to the server, which can then manage it separately from the rest of the traffic. Puavo is used to manage the wireless network configuration.
Over time we got over the problems and the laptops became stable. But there was still a problem: all the wired workstations booted from the network and used a single image that we could test easily, while laptops required additional testing work, as we needed to cover a lot of migration scenarios. We thought that maybe it would be possible to run the same netboot image on laptops - and it indeed was. We had to make some changes to the normal LTSP code, but now we have tools in place to support all this. We now release a single weekly image that runs all our desktops. When a laptop is installed, it is booted once using PXE and registered in Puavo’s hardware inventory like any other device. It then copies the image over the network in a few minutes and installs GRUB. GRUB knows how to load the kernel and initrd from the image, so there’s just the single squashfs image file on the hard drive. Changing the image changes the operating system at once. Installing a laptop doesn’t take more than 5 minutes, so opening the retail packaging takes more time than the software installation.
Updating laptops is still a bit different from netboot devices, but now it is all automatic and atomic. We have a background process running on laptops that downloads the new image when the laptop is in the school network. When the image file is changed in Puavo, the laptop gets the new image name in its configuration. If a diff is available from the boot server, it loads it, and a new image is created using the diff and the old image. When the new image is ready, it is placed on the images partition. There is no GRUB configuration to update, as it dynamically detects the image when the system boots. In this model every update means a reboot, but it also means that we can easily roll back to the previous version if there are any problems. Updating to a new distribution version is also only a matter of an image change; no lengthy upgrade process is needed.
If you know how LTSP servers work, you are probably wondering how we manage the LTSP servers. They also netboot the same image. LTSP-pnp uses different tricks to create the fat client image from the server, but we are now creating a single image that also boots as an LTSP server. The image is built using Puppet rules, so it is a repeatable process, and building the 7GB image now takes some 15-20 minutes on a beefy server. At a school there’s just one boot server with a “normal” Ubuntu server installation. Everything else runs from it.
So now we have netbooting teachers' workstations providing a wireless network for the laptops, which use the same image as everything else. One can install it on pretty much anything, and all the needed configuration is stored in Puavo. Everything can be configured centrally in one place, and the boot servers have a slave LDAP directory with all the configuration data. The boot servers do not need the management tools installed, which makes installation and management easy.
If you want to learn more, it’s all on Github. Puavo started as an internal tool and there are no other users for it at the moment, but we’d be more than happy to help if you want to give it a try. There is also Ubuntu packaging available at archive.opinsys.fi for the brave.
Veli-Matti Lintu
This new Labs site contains news and updates about our development efforts. Expect posts about Linux and Web technologies. Check out the archives to see what we have been posting previously.
While making sure that our experience with Octopress would be as smooth as possible, I integrated Guard::LiveReload with Octopress. Since I could not find any proper documentation on the Web on how to do it, I’m posting it here.
LiveReload is an OS X tool and a Chrome extension which frees developers from manually reloading the pages they are developing: it monitors the file system and reloads the pages automatically. Guard::LiveReload is a CLI version of it written in Ruby.
While there are already blog posts on how to use Guard::LiveReload with Octopress, I was unable to use them with our Octopress installation. For example, this blog post presents the most obvious way of integrating it with Octopress, and it fails with large blogs. It tries to monitor files in the public/ directory and reloads immediately when it sees a change. This is where it fails: generating our blog takes from 7 up to 30 seconds depending on the machine. That means Guard reloads the page way too early.
One could try to work around it by setting the --latency option of Guard, which delays the task execution, but that’s pretty much guesswork. The real solution would be to implement some kind of debounce feature in Guard which delays the task execution until the events stop arriving. Since I’m not a Guard hacker, I ended up creating a Jekyll plugin to work around this.
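A sketch of the plugin (assuming the Octopress-era Jekyll internals, where Site#write performs the final write pass and @dest points at the public/ directory):

# plugins/generated_stamp.rb
module Jekyll
  class Site
    alias_method :write_without_stamp, :write

    # Run the normal write pass, then touch public/generated as a
    # marker that the whole build has completed.
    def write
      write_without_stamp
      File.open(File.join(@dest, 'generated'), 'w') { |f| f.puts Time.now }
    end
  end
end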
It monkeypatches Jekyll to hook into the build process and write a file to public/generated when the whole build operation has been completed. Now we can monitor just this file in our Guardfile, and the reload happens exactly when it is supposed to!
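The Guardfile watcher then becomes (a minimal sketch):

guard 'livereload' do
  watch('public/generated')
  watch(%r{public/stylesheets/.+\.css})
end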
Because Octopress (or Jekyll?) uses a different thread for compiling stylesheets, we want to add a separate watcher for them too.
To actually use this we need to add the guard, guard-livereload and rb-inotify gems to the Gemfile and install the LiveReload Chrome extension.
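In the Gemfile, something like:

group :development do
  gem 'guard'
  gem 'guard-livereload'
  gem 'rb-inotify'
end

(The development group is an assumption; plain top-level gem lines work just as well.)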
Then just install them with bundle install.
Now activate the LiveReload plugin in Chrome, start Octopress in preview mode with bundle exec rake preview, and in a second terminal start Guard with bundle exec guard.
Happy hacking!
That’s not a smooth developer experience! Not even with LiveReload.
Unfortunately it’s not trivial to speed up the Jekyll build process. Generating dozens of pages just takes time. My solution is to just remove all the pages I’m not currently interested in with rm source/_posts/* and restore them with git after I’m done.
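Something like this restores them (assuming the posts are tracked in git):

$ git checkout -- source/_posts/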
We, like many others, are relying more and more on the Open Web Platform - or HTML5, as it is more commonly referred to. We want to bring some of that goodness to our desktops too. With it we can utilize our current in-house talent and existing code much more efficiently.
We decided to take a shot at a new project called AppJS and try reimplementing Gnome’s good old menus with HTML5. Here we will take a look at how AppJS delivered.
Originally we planned to build it with Python, Qt and Webkit, like we did with Iivari. But some research showed that the best tool for the job might be the Chromium Embedded Framework, which allows developers to leverage all the hard work Google and the Chrome team have done on Webkit, instead of just using, for example, the Webkit widget from Qt. Wikipedia has a pretty convincing list of applications using it, which led us to AppJS.
I think AppJS is best described as the bastard child of Chrome and node.js. In practice it is a node.js module which allows developers to create small browsers with just JavaScript.
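A minimal example looks roughly like this (following the AppJS API of that era; check the AppJS examples for the exact options):

var appjs = require('appjs');

// serve static files from the content/ directory
appjs.serveFilesFrom(__dirname + '/content');

// create a chromeless browser window
var window = appjs.createWindow({
  width: 640,
  height: 480
});

window.on('create', function() {
  window.frame.show();
  window.frame.center();
});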
This would open up a chromeless browser window using an index.html from the content directory. This is just the beginning, of course. Developers can communicate with the browser code using DOM events.
Send an event from node.js using dispatchEvent:
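Something along these lines (the exact signature varied between AppJS versions, so treat this as a sketch):

// node.js side: fire a custom DOM event into the page
window.dispatchEvent('menu-open');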
and receive it in the browser like any other DOM event with addEventListener:
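For example:

window.addEventListener('menu-open', function(event) {
  // runs in the browser context when node.js dispatches the event
  console.log('menu-open received');
});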
It’s brilliant how it reuses APIs already known to HTML5 developers.
Now this is where things get really exciting or confusing depending on how you look at it.
When using the current master branch (e2ef58b872) of AppJS, it exposes the node.js require function directly in the browser context. With the latest release, currently 0.0.19, you can do that manually. This means you can actually do var fs = require("fs"); and fs.readFile("./file.txt", ...); directly in the browser!
While this is quite impressive from a technology standpoint, I’m not too big a fan of it. Why? Because it is very AppJS specific. The reason HTML5 is so popular is that it runs just about everywhere. Apps written with code like that won’t run anywhere else but on AppJS. Using only the events for node.js API access would force developers to write the operating system level code in an abstract manner. Then it would be really easy to move the code to a real browser and just reimplement the event handlers.
I also have a more practical reason for not enjoying it too much. While this is a really easy way to give a module system to client-side code - which it really needs - it breaks other things. We already use RequireJS for our client-side code, and it just does not work with AppJS if there is already a require function in the global scope. Another practical issue is that some libraries which try to detect whether they are in the browser or in node.js get confused by this. Luckily the AppJS developers are really responsive on the #appjs channel on freenode, and they started looking for solutions after I mentioned this issue. For now we solved it by forking AppJS and adding an option to disable it.
So back to reimplementing Gnome menus. The target for this first iteration was just to reimplement the existing features of the menu - nothing too fancy. It reads the menu content from a JSON file and builds a menu from it. In the future we plan to integrate some communication channels from schools back to us at Opinsys, plus a remote update mechanism for the menu content, so that we and our customers can update it without rebuilding the LTSP Ubuntu images.
If you want to try it, feel free to grab a deb package from the Webmenu Github repository’s downloads section. The package is a bit hackish. It installs Webmenu to /opt/webmenu and adds a couple of executables to PATH. The package contains all the dependencies which cannot be found in the default Ubuntu repositories. That means it contains the latest version of node.js, AppJS and a bunch of other random node.js modules. But hey, it should at least be easy to install.
After installation see usage.md on Github and menujson.md if you want to customize the menu content.
Webmenu itself is licensed under GPLv2+.
Overall I’m quite pleased with AppJS. It has some serious bugs, but that’s expected for a project at this stage.
It’s nice to be able to use existing HTML5 and node.js skills and libraries to build desktop apps. I hope the AppJS developers realize that this is the most compelling feature of AppJS, work out the quirks with require, and do not go too crazy with AppJS specific features. One good example of this is that developers can use some existing web frameworks from node.js with AppJS. I hope they keep working on this aspect too. I’d love to be able to use various connect middleware extensions for precompilers and such.
tl;dr: use Linux 3.6 or later.
For those who are still reading, this story describes how we debugged and eventually solved an initially mysterious problem of oopsing kernels. It is written in the hope that others will find it useful and that we will remember how to do something similar next time.
The story started a year ago, when we got the first reports from schools about freezing computers. Soon we found out that the freezing computers had SMART Boards (interactive whiteboards made by SMART Technologies) connected to them through USB.
The software that comes with SMART Boards (Notebook and drivers) is proprietary and no source code is available. Because there was no source code to look at, Veli-Matti did some debugging with strace and patched libusb back then to find out what the driver binary was actually doing. It turned out that it was frantically scanning the USB bus: connecting to the board, writing, polling and finally closing the connection - and looping over this multiple times per second.
Luckily, searching revealed that an “lsusb bomb” is a good way to trigger all kinds of USB problems. After learning that USB2 support had been problematic with SMART Boards, we found that disabling USB2 support made the problem go away. Our problem was not the same as this article in the knowledge base, but it helped us:
http://smarttech.com/us/Support/Browse+Support/Support+Documents/KB1/131884.aspx
Later we found out that we were not the only ones having kernel problems with SMART Boards:
http://lkml.indiana.edu/hypermail/linux/kernel/1112.1/00007.html
After we disabled USB2 support on the problematic machines, life was good again for some time. We reported the problem and waited for a fix that never came. It was time for a real fix. We set ourselves five goals:
1. Develop a reliable method for reproducing the issue with specific hardware, i.e. in our labs.
2. Develop a reliable method for reproducing the issue with generic hardware, so that others (e.g. kernel developers) could reproduce it.
3. Identify the root cause and pinpoint the exact code section.
4. Gather as much information as possible and report the findings to kernel developers.
5. Optionally, develop a patch which fixes the issue and send it upstream.
So it was becoming quite obvious that the system instability was somehow caused by SMART Boards and/or the proprietary software controlling them. The next step was to capture the oops to get a bit of solid ground for studying the problem further.
The easiest way to capture the oops is via a serial console. See Veli-Matti’s earlier post on how to do it in LTSP, but the basic setup is to add
console=ttyS0,115200n81
to your kernel’s boot parameters. And this is what we got:
[ 182.520363] BUG: unable to handle kernel paging request at 00efe000
[ 182.526733] IP: [<c11381d1>] __kmalloc+0xb1/0x1f0
[ 182.531505] *pdpt = 000000002ed88001 *pde = 0000000000000000
[ 182.537339] Oops: 0000 [#1] SMP
[ 182.540628] Modules linked in: nls_utf8 isofs snd_hda_codec_hdmi snd_hda_codec_realtek bnep rfcomm bluetooth parport_pc ppdev snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event i915 snd_seq drm_kms_helper drm snd_timer snd_seq_device snd coretemp dcdbas hid_generic microcode soundcore i2c_algo_bit snd_page_alloc psmouse mac_hid serio_raw mei video lp parport usbhid hid e1000e
[ 182.578242]
[ 182.579752] Pid: 4482, comm: usboops Not tainted 3.5.4-opinsys #2 Dell Inc. OptiPlex 790/0D28YY
[ 182.588589] EIP: 0060:[<c11381d1>] EFLAGS: 00210206 CPU: 0
[ 182.594138] EIP is at __kmalloc+0xb1/0x1f0
[ 182.598280] EAX: 00000000 EBX: 00efe000 ECX: 0001e0a7 EDX: 0001e0a6
[ 182.604617] ESI: f3002200 EDI: 00efe000 EBP: eecffe80 ESP: eecffe4c
[ 182.610951] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
[ 182.616409] CR0: 8005003b CR2: 00efe000 CR3: 36bf5000 CR4: 000407f0
[ 182.622744] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 182.629078] DR6: ffff0ff0 DR7: 00000400
[ 182.632956] Process usboops (pid: 4482, ti=eecfe000 task=f18cb280 task.ti=eecfe000)
[ 182.640695] Stack:
[ 182.642729] 000220af c1421199 000000c0 00efe000 0001e0a6 00022001 c142b66a 000000d0
[ 182.650666] 00000004 0001e0a7 f690f660 f19e4900 f3200a00 eecfff10 c142b66a 00000004
[ 182.658606] 00000000 00000000 00000000 00000004 0009a45d eedeb0c0 eecffefc f690f670
[ 182.666541] Call Trace:
[ 182.669015] [<c1421199>] ? usb_alloc_urb+0x19/0x40
[ 182.673950] [<c142b66a>] ? usbdev_do_ioctl+0x165a/0x1c10
[ 182.679410] [<c142b66a>] usbdev_do_ioctl+0x165a/0x1c10
[ 182.684697] [<c142bc20>] ? usbdev_do_ioctl+0x1c10/0x1c10
[ 182.690166] [<c142bc2d>] usbdev_ioctl+0xd/0x10
[ 182.694749] [<c1157aa2>] do_vfs_ioctl+0x82/0x5b0
[ 182.699506] [<c108ef6b>] ? ktime_get_ts+0xeb/0x120
[ 182.704441] [<c12bf8f0>] ? copy_to_user+0x40/0x60
[ 182.709287] [<c115869a>] ? poll_select_copy_remaining+0xca/0x110
[ 182.717100] [<c108f0f4>] ? getnstimeofday+0x54/0x120
[ 182.723854] [<c115803f>] sys_ioctl+0x6f/0x80
[ 182.729890] [<c104b5a2>] ? sys_gettimeofday+0x32/0x70
[ 182.736743] [<c15a619f>] sysenter_do_call+0x12/0x28
[ 182.743394] Code: 01 00 00 8b 06 64 03 05 e4 7f 90 c1 8b 50 04 8b 18 85 db 89 5d d8 0f 84 28 01 00 00 8b 7d d8 8d 4a 01 8b 46 14 89 4d f0 89 55 dc <8b> 04 07 89 45 e0 89 c3 89 f8 8b 3e 64 0f c7 0f 0f 94 c0 84 c0
[ 182.765014] EIP: [<c11381d1>] __kmalloc+0xb1/0x1f0 SS:ESP 0068:eecffe4c
[ 182.772621] CR2: 0000000000efe000
[ 182.786426] ---[ end trace 8c5bb233276a5431 ]---
But the problem was that it was quite difficult to reproduce. Oopsing was certain, but the time window varied a lot: sometimes it happened on freshly booted systems almost immediately when the SMART Board was used, and sometimes it took more than a day. Without an easily reproducible method, determining whether a newer kernel would fix the problem is almost impossible. We had to come up with a method which would make the kernel oops reliably and instantly. Enter lsusb_bomb.sh:
#!/bin/sh

set -eu

lsusb_loop() {
    for _ in $(seq 1 10000)
    do
        lsusb -v >/dev/null 2>&1
    done
}

for _ in $(seq 1 15)
do
    lsusb_loop &
done
Veli-Matti’s previous debugging work on this issue had revealed that bombing the USB bus with numerous simultaneous lsusb processes made the whole system really unstable, and it confirmed that the bug still existed in newer kernels. Running lsusb_bomb.sh on a system which had a SMART Board plugged in and the proprietary SMART Board software running caused the kernel to oops in a couple of minutes. The first goal of the study was achieved.
At this point we had a way to crash the kernel from userspace when a SMART Board was connected to the USB bus and the proprietary SMART Board processes were running. But to be able to get help from the kernel community, we had to come up with a more generic way to reproduce the issue. Sending the kernel mailing list a bug report describing a problem which occurs only when a special device is being controlled by proprietary software did not sound like a good idea back then, and it does not sound good now either. If you are able to show kernel developers how to easily reproduce the bug you are reporting, you are more likely to get feedback.
Because we were able to crash the kernel from userspace, we knew we should be able to reproduce the problem reliably with any hardware, given just the right conditions. We decided to spy on the proprietary SMART Board driver process by wiretapping the communication line between the process and the kernel. Sounds thrilling, but it was actually the most boring part of the debugging process. We just added several syslog() calls to different places inside libusb and replaced the libusb object the SMART Board Service was using with our own tracing version of it.
Based on the captured trace, we wrote a short program, usboops.c, which emulated the proprietary SMART Board driver well enough (many communication details were ignored, because they were not relevant from the crashing point of view), with the exception that it was designed to communicate with an ordinary USB mouse (actually, any USB HID device with an interrupt IN endpoint could be used):
#include <usb.h>

/* Check `lsusb -v' for correct values on your machine. A USB mouse is
 * a good choice. */
static const int ID_VENDOR = 0x093a;
static const int ID_PRODUCT = 0x2510;
static const int B_INTERFACE_NUMBER = 0;
static const int B_ENDPOINT_ADDRESS = 0x81; /* Any interrupt IN endpoint. */
static const int W_MAX_PACKET_SIZE = 4;

int main(void)
{
    struct usb_bus *bus;
    struct usb_device *device;
    usb_dev_handle *dev_handle = NULL;

    usb_init();
    usb_find_busses();
    usb_find_devices();

    for (bus = usb_get_busses(); bus; bus = bus->next) {
        for (device = bus->devices; device; device = device->next) {
            if (device->descriptor.idVendor == ID_VENDOR
                && device->descriptor.idProduct == ID_PRODUCT) {
                dev_handle = usb_open(device);
                break;
            }
        }
    }

    usb_detach_kernel_driver_np(dev_handle, B_INTERFACE_NUMBER);
    usb_claim_interface(dev_handle, B_INTERFACE_NUMBER);

    while (1) {
        char buf[W_MAX_PACKET_SIZE];
        usb_interrupt_read(dev_handle, B_ENDPOINT_ADDRESS,
                           buf, sizeof(buf), 1);
    }

    return 0;
}
And usboops.c proved to be extremely efficient: we managed to crash all test systems in a matter of seconds. We found out that v3.6 did not suffer from the problem, but all stable releases were crashing. The only requirements were a USB mouse and usboops.c. The second goal of our study was achieved.
We were getting closer to the solution. Based on our tests, it seemed that v3.6 (a0d271cbfed1dd50278c6b06bead3d00ba0a88f9) was not suffering from the issue, but v3.5.4 (d61ed4631511b08d2e14924eab16a9ddaed44df6) definitely was. The good news was that v3.6 seemed to fix the problem, but we still had to pinpoint the commit responsible, because:
None of our production systems were using Linux 3.6 at that time, and backporting a newer kernel to our systems would have broken many things. If we could pinpoint the first commit to fix the problem, we might be able to extract the fix and backport it to older stable releases.
It was not 100% sure that v3.6 fixed the problem. It might have been that our test case (usboops.c + mouse) was just failing.
The bad news was that there were 11130 commits separating v3.5.4 and v3.6. Luckily, there’s a magnificent tool for hunting bugs and fixes: git bisect.
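Because we were hunting a fix rather than a regression, the usual roles of good and bad in bisect are inverted; a sketch of such a session:

$ git bisect start
$ git bisect good v3.5.4    # "good" here means: still crashes
$ git bisect bad v3.6       # "bad" here means: the fix is present
$ # at each step: compile, boot, run usboops.c, then mark the revision
$ # good (crashed) or bad (survived) until one "bad" commit remains

The bisect-compile-test loop took less than 4 hours (log2(11130) < 14) and led us to the conclusion that the following commit was The Fix: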
commit df2022553dd8d34d49e16c19d851ea619438f0ef
Author: Alan Stern <stern@rowland.harvard.edu>
Date: Wed Jul 11 11:22:26 2012 -0400
USB: EHCI: use hrtimer for interrupt QH unlink
This patch (as1577) adds hrtimer support for unlinking interrupt QHs
in ehci-hcd. The current code relies on a fixed delay of either 2 or
55 us, which is not always adequate and in any case is totally bogus.
Thanks to internal caching, the EHCI hardware may continue to access
an interrupt QH for more than a millisecond after it has been unlinked.
In fact, the EHCI spec doesn't say how long to wait before using an
unlinked interrupt QH. The patch sets the delay to 9 microframes
minimum, which ought to be adequate.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We had achieved the third goal.
Now we were ready to report our findings to the kernel community. We had an easily reproducible bug. Check. We had bisected the history and found a commit which seemed to fix the bug but which was not part of any stable release. Check. The bug hadn’t been reported earlier by anyone else. Check. Then, writing and sending the bug report was just a matter of being accurate and following the instructions.
And the result is here: http://www.spinics.net/lists/linux-usb/msg71648.html
We got the confirmation we were asking for only 41 minutes and 15 seconds later. Crazy.
The fourth goal was achieved.
We never had to attempt the fifth goal - Alan had done it for us. And to be honest, we were lucky; it would probably have been too big a bite to chew at this point. But the adventure was fruitful, interesting and educational. Our first fixed kernel bug is still lying somewhere in the future, waiting for us to smash it with our virtual flyswatter. But, Bug, wherever you are, beware! We are coming after you.
We run a lot of LTSP thin/fat clients in schools on some exotic hardware. Some time ago we learnt that some hardware combinations were more prone to crashes than others. After quite a bit of debugging, it turned out that the kernel had a bug in the USB stack that was triggered by SMART Board drivers. Finding the bug was an interesting exercise in itself, but to get the fix out we needed to update to kernel 3.6, which had the fix in place.
LTSP chroots require aufs/overlayfs support in the kernel, so until now we have been using Ubuntu kernels that have the required patches. 3.6 is not yet packaged for Ubuntu with aufs/overlayfs patches, so we needed to get a vanilla kernel compiled with overlayfs. Compiling the kernel itself is not difficult, but the whole process needed to package it the Debian way was an exercise we wanted to document for others too.
This document describes how to build and package vanilla kernel.org kernels enhanced with third-party patches, configured with certain Ubuntu kernel settings, and how to manage the whole thing cleanly in Git. More specifically, this document describes how to
Prepare your working environment for kernel compilation and packaging,
patch v3.6 kernel with OverlayFS,
compile the patched kernel with Ubuntu Quantal’s configuration,
and package the whole thing cleanly in a Debian package to make it ready for distribution.
OverlayFS is required to have a writeable filesystem layer on top of a read-only filesystem layer. In LTSP, the client image is mounted as the base layer, and a writeable layer, implemented as a tmpfs mount, is placed on top of that. Hence, all modifications to the writeable layer are non-persistent.
This document assumes the reader has basic knowledge of the following topics:
Linux command line tools
Git
Debian packaging
Vanilla kernel
A kernel obtained from Torvalds’s Git tree.
Stable kernel
A kernel obtained from the stable Git tree. Stable kernels are based on the latest kernel releases made by Torvalds in his tree, with various patches applied on top of that.
Ubuntu kernel
A kernel obtained from Ubuntu’s release-specific Git tree, for example Quantal’s Git tree. All Ubuntu kernels are based on stable kernels with various patches (e.g. Ubuntu-specific additions, packaging, cherry-picks from elsewhere) applied on top of the stable base.
Tell Debian packaging tools who you are:
$ export DEBFULLNAME="John Doe"
$ export DEBEMAIL="john@doe.tld"
This data is used in debian/changelog and is a mandatory part of the changelog entry. If omitted, the packaging tools will come up with something anyway, so it’s better to set it right, once and for all. I suggest adding those exports to your ~/.bashrc to avoid re-setting them on every session.
Tell Git who you are:
$ git config --global user.name "$DEBFULLNAME"
$ git config --global user.email "$DEBEMAIL"
Git uses this data as the commit author identity and stores it in your ~/.gitconfig.
Create working directory:
$ mkdir -p ~/devel
$ cd ~/devel
All build products will be placed in this directory.
Clone the Vanilla kernel repository:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$ cd linux
The default remote is always named origin:
$ git remote
origin
However, we are going to have multiple remotes and multiple upstreams (origins), so let’s give the default remote a better name:
$ git remote rename origin torvalds
Let’s add a couple more remotes - more specifically, Ubuntu Quantal’s repository for the packaging and configuration stuff, and OverlayFS’s repository for, well, OverlayFS:
$ git remote add ubuntu-quantal git://kernel.ubuntu.com/ubuntu/ubuntu-quantal.git
$ git remote add overlayfs git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs.git
And then, let’s fetch the data from those remotes:
$ git fetch ubuntu-quantal
$ git fetch overlayfs
Now we are all set for the next step: patching the kernel.
There are several different ways to apply patches to current Git branches: patch && git commit, git am, git cherry-pick and git rebase. Each of these has its own uses:
the good old patch when you don’t have another option,
git am when you are applying a series of patches formatted with git format-patch and probably received via email,
git cherry-pick when you want to pick a commit from some other tree,
and finally git rebase when you want to take a series of commits and place them on top of another branch (git rebase satisfies many other use cases; it’s a true Swiss Army Knife inside a Swiss Army Knife, yo dawg).
We are trying to build a v3.6 kernel with support for OverlayFS. OverlayFS has not been merged to the mainline (Torvalds tree) yet; it’s developed in a separate repository based on Torvalds’s tree. The goal is to create a tree which is based on v3.6 and has the latest version (v15) of OverlayFS on top of it. Luckily, git rebase makes it ridiculously easy. Let’s first create and check out a new local branch based on the latest OverlayFS version:
$ git checkout -b v3.6+overlayfs overlayfs/overlayfs.v15
Then rebase the current branch on top of v3.6:
$ git rebase v3.6
Ubuntu kernel release branches are based on stable kernel releases. In addition to the stable version, Ubuntu has different types of changes on top of that:
Ubuntu-specific kernel source modifications: changes in the kernel
tree which are not in upstream. Commit messages prefixed with
UBUNTU: SAUCE:
.
Ubuntu-specific kernel configuration modifications: changes only
in configuration files. Commit messages prefixed with UBUNTU:
[Config]
.
Debian packaging changes: changes in debian/
and debian.master/
directories. Commit messages prefixed with UBUNTU:
.
Patches (cherries) picked from upstream: these are not prefixed, but most of them (all?) have a line mentioning where the cherry was picked from and a URL pointing to the corresponding bug report in Launchpad.
We rebased OverlayFS on top of the latest Torvalds release (v3.6) and not on top of the older Ubuntu Quantal. However, Ubuntu Quantal’s tree has all the Debian packaging infrastructure we need. We could have tried to rebase Ubuntu Quantal’s tree onto v3.6, but that would probably have resulted in a lot of conflicts because, as said earlier, Ubuntu kernel trees are based on stable releases and have a lot of other changes on top of that. So we decided to do it differently: let’s take only the necessary bits from Quantal’s tree and apply them on top of our v3.6+overlayfs branch. In this case, the necessary bits are only a couple of trivial patches (cherries) to tools/hv, which make the whole thing packaging-friendly:
17c9fa8 UBUNTU: tools/hv: add basic manual pages
ecb998c UBUNTU: tools/hv: add basic Makefile
Cherries are easy to pick:
$ git cherry-pick -x 5635e65ba0f3f508cdaf290adb42b8d2364c000f
$ git cherry-pick -x efea9d22efa602b01d712698f886300a597e2102
That’s it, they should’ve applied cleanly. We are ready for the last step.
In the Ubuntu kernel build system, the packaging infrastructure lives inside the debian/ and debian.master/ directories. Let’s check them out to our tree as they are in the latest commit of ubuntu-quantal/master:
$ git checkout ubuntu-quantal/master -- debian
$ git checkout ubuntu-quantal/master -- debian.master
The Ubuntu kernel build system tracks ABI changes, which consists of tracking exported symbols and modules. The Ubuntu build system also enforces certain configuration options. Here we are simplifying things a bit and just remove all the checks with the following simple script:
for i in debian/scripts/*-check debian.master/scripts/*-check
do
    if [ -f "$i" ]
    then
        cat - <<EOF >"$i"
#!/bin/sh
exit 0
EOF
        chmod 755 "$i"
    fi
done
One more thing to do to satisfy the build system. Let’s just create symbol and module lists:
$ cp debian.master/abi/3.5.0-17.26/i386/generic debian.master/abi/3.5.0-17.26/i386/fatclient
$ git add debian.master/abi/3.5.0-17.26/i386/fatclient
$ cp debian.master/abi/3.5.0-17.26/i386/generic.modules debian.master/abi/3.5.0-17.26/i386/fatclient.modules
$ git add debian.master/abi/3.5.0-17.26/i386/fatclient.modules
Here, we have used 3.5.0-17.26 as the base version, but your version might differ. Change it if necessary.
Then, let’s create a new configuration flavour for the LTSP client image:
$ cp /boot/config-$(uname -r) debian.master/config/i386/config.flavour.fatclient
$ fakeroot debian/rules clean defaultconfigs
$ git add debian.master/config/i386/config.flavour.fatclient
We need to make a couple of changes. Modify
debian.master/etc/getabis
to have the following change:
--- debian.master/etc/getabis
+++ debian.master/etc/getabis
@@ -11,7 +11,7 @@
package_prefixes linux-image linux-image-extra
getall armel omap
getall armhf omap highbank
getall amd64 generic
-getall i386 generic
+getall i386 generic fatclient
# Ports arches and flavours.
getall powerpc powerpc-smp powerpc64-smp
And debian.master/rules.d/i386.mk
according to the following change:
--- debian.master/rules.d/i386.mk
+++ debian.master/rules.d/i386.mk
@@ -2,7 +2,7 @@
human_arch = 32 bit x86
build_arch = i386
header_arch = x86_64
defconfig = defconfig
-flavours = generic
+flavours = generic fatclient
build_image = bzImage
kernel_file = arch/$(build_arch)/boot/bzImage
install_file = vmlinuz
We need to create one more file, which defines a couple of variables for the build system. Let’s use debian.master/control.d/vars.generic as a template:
$ cp debian.master/control.d/vars.generic debian.master/control.d/vars.fatclient
And then modify it for our needs:
--- debian.master/control.d/vars.generic
+++ debian.master/control.d/vars.fatclient
@@ -1,6 +1,6 @@
arch="i386 amd64"
-supported="Generic"
-target="Geared toward desktop and server systems."
+supported="Fatclient"
+target="Geared toward LTSP client systems."
desc="=HUMAN= SMP"
bootloader="grub-pc | grub-efi-amd64 | grub-efi-ia32 | grub | lilo (>= 19.1)"
provides="kvm-api-4, redhat-cluster-modules, ivtv-modules, ndiswrapper-modules-1.9"
Then, feel free to modify the kernel configuration further:
$ debian/rules editconfigs
Finally, when everything is ready, add a new changelog entry:
$ dch -v 3.6-999.$(date +%s) --distribution quantal --package linux -c debian.master/changelog Vanilla kernel v3.6
And commit all changes to Git:
$ git commit --amend -a -m "OPINSYS: [Config] New configuration flavour for i386 fatclients"
We are using the same kind of message prefixing scheme as Ubuntu does, with the minor exception that UBUNTU is substituted with OPINSYS.
Once everything is configured, the actual compilation and packaging phase is really simple. We just clean the repository, check out a new working branch (because the build process makes some changes to tracked files) and then make all the necessary targets:
$ git clean -fdx
$ git checkout -b kernel-build
$ fakeroot debian/rules clean
$ fakeroot debian/rules binary-indep
$ fakeroot debian/rules binary-perarch
$ fakeroot debian/rules binary-fatclient
It’s evident that packaging custom kernels is not the simplest thing to do. However, many of the steps covered in this document need to be executed only once, and many more could be automated. For occasional packaging, the ad-hoc process described above might be good enough, but if custom kernel packaging becomes common practice, custom packaging infrastructure comes into question. For starters, see Linaro’s packaging spells.
Happy packaging!
Before going into ldapjs, I will review our current architecture and point out its pain points. We store almost all of our data in OpenLDAP - most importantly user data, which I will focus on now.
We have two availability requirements for the user data. It must be editable through some interface from anywhere and always readable from the school network. Editability is required, because there are various web services that use this data for login and users must be able to change their emails and passwords for those from home or any other place.
Availability in school network is crucial for desktop computer logins. Some schools have very spotty Internet connectivity and it is unacceptable to have broken logins when the Internet connection is down.
All the user data is stored by organisation: each organisation has its own subtree in our OpenLDAP master server, which is located in a proper data center in central Finland. From there, each organisation subtree is replicated to a corresponding OpenLDAP slave server in the school’s network.
Technically this works just fine, but for us developers, working with OpenLDAP to create new applications is far from ideal.
When you set up a Linux machine to authenticate against an OpenLDAP server, you have to give access rights for every user directly to it. Since we have other application data in LDAP as well, we have to be very careful when setting up ACL rules so that users cannot read or write any data that does not belong to them.
This leads to something I call “LDAP driven development”, where every data access is validated by an OpenLDAP ACL rule, whether it is a desktop login or an OAuth login to some web service. Setting up these rules is no fun. They are not exactly designed for large scale application development.
For an introduction, I suggest you read the ldapjs post on the Node.js blog. To summarize: ldapjs is a framework for building LDAP servers, in the same way that Express and Sinatra are frameworks for building HTTP servers. It is also similarly very lightweight; it does not come with a database or a replication story. It just gives you the tools for interacting with LDAP clients.
I had two weeks to play with this, and I wanted to see if I could meet our requirements with it. For the database I went with CouchDB, for its praised replication features, which seemed to be a good fit for our uses.
I wanted to design the architecture with minimal interactions with LDAP. Everything should be done with CouchDB and some custom REST APIs. The LDAP side of things should be just a CouchDB adapter for those services that can only interact with LDAP servers, such as Linux client logins.
My target client system was an Ubuntu Precise Pangolin desktop with the libpam-ldapd, libnss-ldapd and nslcd packages to provide LDAP integration.
Programming with ldapjs is done by writing handlers for LDAP methods, such as bind, search and modify, and for the base DN the operation is executed against. For details, check out the guide on ldapjs.org.
To provide an LDAP server for the Ubuntu client to authenticate against, I only had to care about the bind and search methods. In a search handler you get a request object with a filter attribute, which contains the parsed LDAP search filter along with a matching function that can be used to test whether some JavaScript object matches the filter.
With that, it is easy to go through a list of objects and send the matching ones to the client:
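A sketch of such a handler (assuming an in-memory users array of {dn, attributes} objects; the base DN here is made up):

server.search('ou=People,dc=example,dc=org', function(req, res, next) {
  users.forEach(function(user) {
    // req.filter is the parsed LDAP filter; matches() tests a plain object
    if (req.filter.matches(user.attributes)) {
      res.send(user);
    }
  });
  res.end();
  return next();
});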
But that does not actually scale if you have thousands of users in the database. It’s not reasonable to fetch them all one by one and test for a match. Ideally I should transform the filter into a query understood by my database, but with CouchDB this is not reasonable, because its search capabilities are not as expressive as LDAP search filters can be.
Because this time I was targeting only one LDAP client, I was able to take a shortcut and just detect the exact queries required for nslcd to work. I fired up a dummy ldapjs server which printed out every search filter that was made against it. There weren’t that many.
Then I created a few functions with js-schema to detect those cases. This of course results in a broken LDAP server implementation if used with any other client, but as a CouchDB LDAP adapter for these Ubuntu clients it is just fine.
Initially I wanted to design the server without any root accounts: on login, only the credentials of the user logging in would be used for the LDAP bind. But it appears that this is not possible, because the user listing must be readable by nslcd before the user even enters his password.
There is a way to work around this. Our current production setup with OpenLDAP uses the credentials to retrieve a Kerberos ticket from our Kerberos servers before making any connections to the OpenLDAP server, and only then uses the Kerberos ticket to bind with OpenLDAP. Sadly, this is where ldapjs falls short: it does not support the SASL bind mechanism, which is required to authenticate with Kerberos tickets. I opened a Github issue about it.
I’d love to hear if there is a way to configure logins without Kerberos so that no user data is required until the password is supplied.
Working with ldapjs has been quite a nice experience. It does keep its promise of bringing Web framework style development to LDAP, although the clients are not as nice as web browsers are. For example, nslcd makes over 200 queries to the LDAP server when logging in a single user if used without a caching daemon. Better keep the caches warm when handling large schools.
The code I’ve written while experimenting with ldapjs can be found from our Github account:
https://github.com/opinsys/couchldap
Feel free to poke at it if CoffeeScript does not upset you too much. For a more complete implementation, check out ldapjs-riak from the creators of ldapjs.
School technology is a tough job - specialised tools, lots of demanding users, little money, and usually politics involved in some way. I'm working at Opinsys, developing new technology that best addresses the needs of schools. We work with everything from kindergartens up to high schools (K12). Our aim is to make school technology usage as seamless as possible, so that teachers can concentrate on pedagogy and help pupils learn better instead of spending their time fighting with unusable technology.
Opinsys now manages some 160 schools with some 6,000 computers and 40,000 users. We provide a full service, so the school gets everything for a fixed fee - servers, maintenance, training, helpdesk etc. Currently we work in Finland, as that's what we know best, but we are happy to get in touch with schools elsewhere too.
We base our solutions on Ubuntu and LTSP. Most of the computers are thin and fat clients, but laptops and some Windows machines and Macs are also in use. Most of the schools have their own server that is connected to our centralised Puavo management tool. We use Puavo to manage the school information, users and devices. Puavo makes it possible for teachers to do many tasks themselves, like user management, easily through a web browser. Puavo connects to web tools, so deploying e.g. LMSs is simple, and the desktop user accounts also work in web based services.
Using Ubuntu and LTSP has significant benefits over many of the alternatives.
In a typical setup, every classroom usually has a teacher desktop computer that acts as a hub connecting the other classroom technology. The pupil computers are usually smaller thin clients to conserve space in classrooms.
A common setup has pieces like:
It is becoming more and more common that classrooms also have a few fixed desktops for pupils, plus laptops that can be used around the school. Tablets are also being deployed in many places, so the classroom wifi access points are being used more and more. Some schools also allow pupils to use their own mobile devices through the wireless networks.
In addition to providing infrastructure and desktops, we are also developing new tools to help collaboration within the classroom. Currently we have two prototypes that are freely available:
We are also developing Iivari, a digital signage system for schools. It provides a simple but effective platform to broadcast information around schools.
If you want to try out the tools or follow the development, check our GitHub profile.
The Finnish school system has received a lot of media attention around the world lately. Finnish pupils have been scoring at the top in the international PISA tests, and because of that, educators outside Finland have been interested in knowing how this has been achieved. I could not claim to know much about the foundations of the system myself when I got to experience this interest personally earlier this year while travelling in some far-away places. When people learnt that I work with schools for a living, they immediately asked how Finland is achieving these results. I've been surrounded by the Finnish education system all my life, so for me there's nothing special about it. I did spend a year in the US as an exchange student during high school, though, and only now am I starting to understand what I experienced there. Pasi Sahlberg has written a book called Finnish Lessons that tells more about the Finnish system if you are interested. I personally found the book great.
My background is not in pedagogy but technology. I have been working on school ICT for 7 years now at Opinsys, so I relate my findings to that. As part of my job I've tested a lot of software designed for schools, and sometimes I've been left wondering why it has some really weird features. I have often felt that there is a misalignment between the needs and the features. After getting to know more about the differences in school systems around the world, I'm starting to understand how the Finnish system is reflected in ICT requirements.
First with the basics:
When I was on my exchange year in the US, I couldn't stop wondering why there were tests all the time and why anyone would be interested in knowing their ranking within their class. I was used to having one exam per course and sometimes no tests at all. And the tests in the US were multiple choice exams instead of open questions. When I realised that most of the world has a quite different school system, I started to understand why some of the tools really make no sense in Finland - like some of the Moodle modules. E.g. why have multiple exam modules for multiple choice exams when there are no multiple choice exams in use at all? (Optical scanner manufacturers have been left out in the cold too.) Moodle is still being used a lot, but blogs and other online tools are also spreading.
When browsing online for student management tools, one easily finds features like "fee management". When schools are free and everyone also gets free lunches, there's no need to bring a lunch box or buy lunch coupons. And because there are no other payments to schools either, there's no need to manage payments in student information systems, so a feature like that doesn't make much sense either. The government doesn't need tools to rank schools either.
There are still tools needed, and in Finland almost all schools use StarSoft's Primus and Wilma as their student management software. They have built their tools to match the Finnish system and laws, and schools use them to communicate with the parents online. There's also a newcomer in the field, Helmi, but its market share is still small. In a saturated market like this, tools like SchoolTool have little chance of success when the features don't match the need.
In Finland there are many critics today who say that technology is not used to its full potential in Finnish schools. Some blame teachers, some blame cities and some the government. When talking about technology, one can always say that there isn't enough money or other resources. Especially in rural areas it can be that there's nobody who knows how the systems work and how to make the most out of the available resources.
Because all schools are public and run by cities, they cannot demand that every student buy their own laptop, tablet or anything else. Schools need to provide all the technology needed. Some schools allow pupils to bring their own devices to school, but this is still a new thing - and if there's no stable wireless network, having your own wireless device doesn't help much. Some are hoping to integrate personal devices better to supplement school technology in the future. Internet connection filtering is usually not done at all.
As Finland is a large country in land area but has relatively few people, there are a lot of small schools in remote areas with poor internet connections. A 1Mbps internet connection should be available everywhere as a human right, but for schools that is not enough. 100Mbps or gigabit internet connections are getting more common in city schools. Overall, the amount of money available for technology purchases and training varies by city, so not all pupils have the same possibilities regarding technology use. Teachers have autonomy in their classes and are free to experiment, so some are really creative and getting impressive results even when others are struggling. Technology that doesn't help the teacher and the pupils doesn't get used, because the teachers will not use it.
There are many models for school ICT management. In some places it's the math teacher who takes care of everything for no pay, in some places the city's ICT center has dedicated support people for schools, and most are somewhere in between. When the municipalities build the school infrastructure, it is often linked in some way to other ICT systems run by the municipality, such as local government officials' ICT systems, public health care centers, libraries, etc. E.g. schools may share fibre optic cabling with health care and daycare centers in the networking cabinets in their basements.
Innovative educators can get project money to try out new ideas. Schools can apply for technology project grants from the ministry of education, which usually focuses on a few areas every year. At one point it gave money for school networks. Then for many years the focus was on getting more computers into schools. Now it is sponsoring projects developing learning environments. There are quite a few projects every year, each with a different focus. This gives people the possibility to try new approaches, and if the results are good, others can benefit from the work too. It's amazing what a single school can achieve when motivated and creative minds realise their dreams.
This post only scratches the surface of the topic, but I hope to write more about it shortly. There's also more information available on what Opinsys does.
Pahvi walks the same path as Walma, being a tool for enabling student collaboration using computers. The idea for Pahvi comes from collage style group assignments where students cut and paste pieces from magazines etc. to form a collage. These collages are usually built on cardboard paper. Cardboard paper is "pahvi" in Finnish, hence the name.
Pahvi tries to bring this kind of work into the Internet age. Instead of cutting and pasting images from paper magazines, students can use any freely available content* from the Internet.
As in Walma, collaborative working is an important aspect of Pahvi. Students are able to edit the same Pahvi from multiple computers and the changes are synchronized automatically between the browsers.
The idea of Pahvi was originally conceived in Janne Saarela's bachelor's thesis, and we were given a two week sprint to develop a prototype of it. But Pahvi got a surprising amount of traction during the Educa fair, and we were given an extra week to develop it. Now, after three weeks of hacking, Pahvi is still far from a final product, but it has already been used by various schools in Finland!
Feel free to try it here:
Please send any feedback to dev@opinsys.fi. The future of this product is based on your feedback! You can also post issues directly to our Github issues list.
The technology stack is fairly similar to Walma's. It runs on top of Node.js and the client-side code is organized using Backbone.js. An important part of the Pahvi stack is the Zoomooz.js library by Janne Aukia. It enables all the cool zooming action in the presentation mode. So in a sense Pahvi is a WYSIWYG editor for Zoomooz!
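To make the Zoomooz connection concrete, here is a rough sketch of the kind of call the presentation mode builds on. This is illustrative, not Pahvi's actual code; the `.pahvi-box` selector is made up and the option values are just examples (see the Zoomooz docs for the full API):

```javascript
// Zoomooz in a nutshell: animate the viewport so the clicked element
// fills the screen. A Pahvi presentation is essentially a series of
// these zooms between boxes on the cardboard.
// Assumes jQuery and jquery.zoomooz.js are loaded on the page.
$(".pahvi-box").click(function() {
  $(this).zoomTo({
    targetsize: 0.9, // fill ~90% of the viewport
    duration: 600    // animation length in milliseconds
  });
});
```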
For synchronization we decided to give ShareJS a try. It was a very pleasant experience, and we ended up creating a nice Backbone.js adapter for it. We decoupled it from Pahvi and released it under the more permissive MIT license. You can read more about our experiences with it in a separate blog post.
Pahvi itself is licensed under GPLv2+ and is hosted on Github.
We decided to give ShareJS a try. It's developed by Joseph Gentle, who worked on Google Wave. It's an awesome library that allows developers to build small Etherpad clones with just a few lines of JavaScript code. ShareJS has great support for JSON documents, which is a pretty nice fit for Backbone Models.
Backbone.SharedCollection is a collection that listens to all change events of the models within it and saves the changes to a ShareJS document. It also listens to events coming from ShareJS and reflects them back to the models. The neat thing about this is that no special code is required for the models: they automatically get synced between all browser instances when they are added to the collection.
This commit shows the few changes required to make the Backbone.js TODOs example app collaborative. Not too bad. The full example is in the Github repository.
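For a flavour of the pattern, here is a minimal sketch. The `sharejs.open` call is the ShareJS client API of the time, but the adapter's constructor options are assumptions on my part, so treat the README as authoritative:

```javascript
// Sketch: turning a plain Backbone collection into a shared one.
// Backbone.SharedCollection mirrors local model changes into a ShareJS
// JSON document and applies remote operations back onto the models.
// Assumes a Todo model is already defined, as in the TODOs example.
var TodoList = Backbone.SharedCollection.extend({
  model: Todo
});

// Open a shared JSON document with the ShareJS client library...
sharejs.open("todos", "json", function(err, doc) {
  if (err) throw err;
  // ...and hand it to the collection (the option name is assumed).
  var todos = new TodoList([], { sharejsDoc: doc });
  // Plain Backbone usage from here on; every connected browser sees
  // the new model without any model-specific sync code.
  todos.add(new Todo({ title: "Try Backbone.SharedCollection" }));
});
```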
ShareJS also supports Redis as a persistence option, so this gives persistence for Backbone Models for free, which is great when building prototypes. Just build a client-only prototype with Backbone, and when you need synchronization and persistence you can drop Backbone.SharedCollection in and you are set.
For more information and API docs, check out the README.md on Github.
ShareJS supports text operations within string fields in JSON documents. It would be cool to build a Backbone Model where a field could act as storage for mini etherpads.
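The difference matters because a normal attribute update replaces the whole string, while a text operation describes an edit inside it. Roughly, with ShareJS's JSON document API (method names quoted from memory, so verify against the ShareJS docs):

```javascript
// Sketch of the two update styles on a string field of a JSON document.
sharejs.open("pahvi-box-1", "json", function(err, doc) {
  if (err) throw err;

  // Attribute-style update: the last writer wins and concurrent
  // edits to the same field are lost.
  doc.at("text").set("Hello world");

  // Text operation: an insert at position 5 merges cleanly with
  // concurrent inserts elsewhere in the same string.
  doc.at("text").insert(5, ", collaborative");
});
```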
In Pahvi we have editable Text Boxes, and currently the text and the text style attributes are stored as attributes in a Backbone Model. For the style attributes this works great, since it's ok to let the latest update override the previous one, but for the text itself it is problematic: users will lose data when editing the same Text Box concurrently. It would be ideal if we could keep this Model abstraction over the Text Box data and still allow Etherpad-style editing of it.
Ideas and comments about this would be very welcome.
Rich text support is also something we are hoping to see in ShareJS soon :)
Esa-Matti Suuronen
This prototype is all about bringing more social and collaborative aspects to schools' IT stack. The schools that have acquired tablets usually already have interactive whiteboards in classrooms. Those are mainly teachers' tools, while tablets are clearly geared towards students. So with Walma we have created a unified drawing board for both devices. A teacher may ask a student to join in the drawing using his/her tablet, and the drawing is automatically synchronized between the devices in real time. The synchronization works over the public Internet, so the student doesn't even have to be in the same classroom: he/she can be attending the class over a video link and still take part in the lesson with Walma.
You can test out the prototype here:
Be aware that this is a very early prototype with some obvious features missing. The idea is not to be a complete solution yet, but to spark ideas among teachers and other education professionals. You can check out our issue list on Github. Please report any unlisted issues you find.
The prototype requires a fairly modern browser. On desktops the latest Mozilla Firefox and Google Chrome are known to work fine, and on tablets Firefox Beta on Android and Safari on the iPad.
As we have a very small development team, we cannot develop natively for each individual platform. Not only do we have two major tablet platforms, Google's Android and Apple's iOS, but also the desktop platforms. We deliver our solution on top of Linux, but the number of personal devices among students is increasing, so it would be very shortsighted to ignore those. That means at least five different platforms, which is quite a task even for a large team.
So we obviously went the web route. HTML5 Canvas and WebSockets make it quite easy to build a collaborative whiteboard application. A web app also gives huge advantages when it comes to inviting others to join in the drawing: the teacher can just send a link to the drawing, and all the student has to do is open that link. It works no matter what the student's device is. An installation step here would be a deal breaker for usability.
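The core of such an app is surprisingly small. The sketch below is not Walma's actual code, and the endpoint URL and message shape are made up; it just shows the pattern: each client draws a stroke segment locally, broadcasts it, and replays segments received from others onto its own canvas.

```javascript
// Minimal collaborative-canvas pattern: draw locally, broadcast the
// segment, replay segments coming from other participants.
var canvas = document.getElementById("board");
var ctx = canvas.getContext("2d");
var ws = new WebSocket("ws://example.org/board"); // hypothetical server

function drawSegment(seg) {
  ctx.beginPath();
  ctx.moveTo(seg.x1, seg.y1);
  ctx.lineTo(seg.x2, seg.y2);
  ctx.stroke();
}

// Strokes from other browsers arrive as JSON messages.
ws.onmessage = function(event) {
  drawSegment(JSON.parse(event.data));
};

var last = null;
canvas.onmousedown = function(e) { last = { x: e.offsetX, y: e.offsetY }; };
canvas.onmouseup = function() { last = null; };
canvas.onmousemove = function(e) {
  if (!last) return;
  var seg = { x1: last.x, y1: last.y, x2: e.offsetX, y2: e.offsetY };
  drawSegment(seg);             // local feedback first
  ws.send(JSON.stringify(seg)); // then broadcast via the server
  last = { x: e.offsetX, y: e.offsetY };
};
```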
The application itself is written in CoffeeScript and runs on Node.js. If you are interested in the code, you can check it out on Github. All code is released under GPLv2+, so feel free to host it yourself and hack on it. All feature requests, bug reports and especially patches are very welcome.
In theory, all browsers support the requirements of our app: HTML5 Canvas for drawing and WebSockets for real-time synchronization. Canvas is supported by every browser on every platform, and most of them support WebSockets as well.
Missing WebSocket support is not a big deal, since the Socket.IO library for Node.js can cleanly fall back to long polling and other hacks when required. We had more issues with Canvas, even though it should be the more mature technology.
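The fallback is mostly configuration. A sketch with the Socket.IO of that era (0.x, API from memory): the server lists transports in order of preference, and the application code stays the same regardless of which one a client ends up on.

```javascript
// Server-side sketch: prefer WebSockets, degrade to polling hacks
// for browsers without WebSocket support.
var io = require("socket.io").listen(8080);
io.set("transports", ["websocket", "xhr-polling", "jsonp-polling"]);

io.sockets.on("connection", function(socket) {
  // Relay each drawing event to every other client; the transport
  // each of them uses is invisible at this level.
  socket.on("draw", function(segment) {
    socket.broadcast.emit("draw", segment);
  });
});
```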
We had the latest and greatest Android tablets as test devices, the Samsung Galaxy Tab 10.1 and the Asus Transformer. Both ship with fast dual-core processors and Android 3.x with advertised hardware acceleration support in their WebKit-based browsers.
But to our surprise, the Canvas performance in the Android Browser was just terrible. When one draws a line on the device, it gets drawn on remote devices with proper Canvas implementations before it is drawn locally! That is totally unusable for drawing applications. This is really weird, since my personal phone, an HTC Desire Z, which is a much older device and ships with an older version of Android, has no issues with Canvas performance. We tested this with several other Canvas applications as well, so it's not just our app; there is something really wrong in the Canvas implementation. One can speed up drawing slightly by disabling hardware acceleration in the browser, but it's still way too slow for drawing.
Luckily, Google is not the only one shipping browsers for Android. Mozilla is making a great effort with its Android port of Firefox. It has reasonable Canvas performance and it even supports WebSockets, so it's a pretty good fit for our app.
On the other hand, Safari on Apple's iPad, which is also a WebKit-based browser, happens to have incredibly good Canvas performance. It is actually on par with native drawing applications. This is what I would have expected from Google: I see Google as the web company today, and they are really committed to making Chrome the best browser on desktops, yet on tablets they ship the worst. In many ways the Android browser feels like Internet Explorer 6. We haven't had a chance to try the Android 4.0 browser yet, so it may bring improvements.
Tablet browser issues aside, Walma is quite usable as it is. We are planning to integrate Walma with our desktop solution: users will be able to take a screenshot and open it in Walma with a single click. This can be used to elaborate on other teaching material in a way that lets students participate too.
The tablet is a window to the web, and Android being a Google thing, it's easy to assume that doing stuff in the browser is easy. Because more and more of the stuff that needs doing is done online, one could also assume that the tablet is a good tool: light, great battery life, always on, always online (when at all possible), easy to use in a variety of situations. Well, not quite.
How wonderful it is to be able to break false assumptions, especially one's own.
When I started testing, my greatest concern was writing on the touchscreen. But that turned out not to be a big problem; it already works pretty well. The problem is simply that websites have not been designed with tablets in mind. Since our customers are schools, I picked a few of the services that I know are being used here in Finland: Moodle, Wilma (a tool for teachers, parents and students alike), Wikispaces, Prezi and GoogleDocs. Let's see.
First, Moodle. Moodle is customizable, so no one installation is exactly like another. The ones I have seen, though, all have an abundance of small icons and small text input fields. Unless you constantly zoom in and out, it's hard to navigate. Needless to say, zooming should be an option, not a mandatory step for simple input and navigation. A big screen with a normal mouse and keyboard feels like heaven after a few minutes.
Wilma is a very popular intraweb-type service for schools that includes schedules, e-mail, e-portfolios, exam results etc. It is used daily by teachers and students. I found that it works ok on the tablet, although it is far from really convenient. The problem is the same as in Moodle, but to a lesser degree, simply because there's less stuff crammed onto one page.
I have become a great fan of Wikispaces. It's easy to learn, both for teachers and students, and Wikispaces makes a point of giving educators "plus" versions for free. I've been training teachers in its use, so it was only natural to test how it works. Apart from the one time it completely crashed the browser, not much needs to be said. Reading others' wikis goes smoothly; editing is far from convenient, but possible. It's a bit hard to say how much of the usability is about habit.
Prezi is a nice cloud service that lets you do presentations online in a novel way. I like to recommend it because it solves all the compatibility issues between different office suites in one go: you create and store your presentation online and access it anywhere. Unless you're using an Android tablet, that is. It either jams up, working very slowly, or crashes completely. The most recent information I found on this is here.
Finally, GoogleDocs. The logic of presumption goes like this: Android = Google and Google = GoogleDocs, thus Android and GoogleDocs = friends. But that turns out to be false, because GoogleDocs is simply not designed for tablets. It's certainly possible to write with Docs on the tab (in desktop mode it crashed, though), but you completely lack the editor menu. So text goes in, yes, but you only have the keyboard's functions. There is a GoogleDocs app available, but it is apparently designed with phones in mind, not as a full-fledged tablet app. I soon found that what works best for me is using Evernote (a note-taking app that works offline, available from the Android Market) and copy-pasting into a GoogleDoc for later editing on a computer. But that is hardly something I could recommend in general.
So it seems to me that everything needs its own native Android application and/or websites need tablet versions. As things stand, I would not recommend switching from a laptop to a tablet if a great part of the user's work is done online. Since tablets are getting more and more popular, this will probably change. For an interesting discussion on native vs. web apps, have a look at this.
Time to look at what these things CAN do for teaching.
-Kim Wikström
When one is asked to evaluate a new thing, be it what it may, the natural tendency is to compare it with something known. The known options are the laptop and the desktop. But these tools on the one hand and tablets on the other are designed for different purposes, right? If I asked a carpenter to compare the usability of a big saw designed for sawing planks and a small hand jigsaw, he would probably state the obvious: "It depends on what you want to do." When people compare, some conclude that a tablet is a no-go in school because, compared to devices with keyboards, it's hard to use for writing, and you write a lot in school. Let's see. Have a look at the video below from 4:00 onwards.
It's an iPad, but the same points concern our Android friends. Honestly, I don't think anybody knows whether we will have to carry external keyboards with us forever, but somehow it seems unlikely. One thing to remember is that people who have used normal keyboards for 15+ years have a different approach than young pupils in school. It is possible that students will adopt the touchscreen and its future versions for writing without any great loss of speed or convenience. My personal experience, however, matches our Pad Lady's in the video: it's ok for short stuff, but not nice at all if you need to write long things. So: you do need an external keyboard if a tablet is your only mobile device, you write a lot AND you don't want to suffer for X amount of time.
One very important point is illustrated in this video, and that is the support that keeps the tablet upright. During my first days of tabbing I got a serious neck ache from looking downwards. I like the little tripod that she uses, and I suppose many options will appear as time goes by, like this one for example.
If you google "writing with pad / tablet", most of what comes up has to do with the iPad, and a great part of that is about writing apps for the iPad. There are roughly two ways to go from here: one is to search for a good office-like application for Android that works offline, and the other is to go with GoogleDocs. To make it short, I haven't found anything good for the offline option, and the mobile version of GoogleDocs doesn't really satisfy either. I'm going to keep testing and searching, but for the time being writing, say, an essay would be a painful experience. I hope this is temporary and that I can return to this blog after some days and say I have the solution. We'll see.
So let's compare a little. According to what we saw in the video, writing on a tablet is not necessarily a problem, since some choose it freely. The point is to have an external keyboard at hand when you need it. For the time being, the gap on Android is in the applications. I hope we will see some seriously good web-based writing tools with offline capabilities soon, and being the optimist I am, I think we will.
We are going to have a closer look at these and other points now. For the coming months, I've committed myself to a little pad/tablet research job with the aim of gaining some understanding of what makes these things wonderful, useless or, most likely, something whose value depends on the envisioned context of use. I will not be testing an iPad, but pads running Android 3.0.1. Why? Good question; let's just say it's much more interesting for us to see what an open source solution can bring to this. Here is what I'll try to figure out:
I added a few configuration options to this plugin:
We also needed support for multiple search URLs when using it with Puavo. My changes to this plugin are available on Github. Test it, use it, and tell me how it works!
SMART Technologies has worked with linux longer than the other manufacturers, and its software is noticeably better and more polished than the others'. Many of the earlier problems and missing features have been fixed in the newest update from March. There are now also deb packages available for Debian/Ubuntu, which were missing before. The software is available in both Finnish and Swedish.
The software installation package includes the device drivers and the Notebook software. No other steps were needed after installing the package to get it working on the test systems. The Smart software was the only one tested that didn't have issues when installed on a normal workstation or in an LTSP chroot image. The others have a lot to learn in this respect.
Calibration is activated either from the menu or through an icon in the panel. In a dualhead configuration it worked nicely, and the correct display is selected by pressing the spacebar.
Using the Smartboard is straightforward. The whiteboard physically resembles an ink drawing board and comes with four pens in different colours. The software picks the colour by sensing which of the pens is picked up. There is also a physical eraser. Beginners should have no trouble getting used to the different pens and to cleaning the board with the eraser. The Smartboard software also has a nice feature for when a pen is picked up while the Notebook software is not running: it takes a snapshot of the desktop and lets one write directly on it. When the pen is placed back, the normal desktop resumes.
The Smartboard's latency is not the best of the tested boards, and the cursor on the screen lags a bit behind the pen. It doesn't lose track when drawing, though: even if it takes half a second or a second to catch up, the drawing follows the pen or finger. Hardware resource usage was not a problem, but flash objects and video files caused a slowdown.
The only technical problems with the Smartboard software were overly relaxed file permissions, which can cause problems in multiuser environments where users can edit files they shouldn't. The file permission problems were minor compared to some of the other packages, though. There are more details at the end of the article.
There were some random problems during testing that we couldn't reproduce. At least once the whiteboard stopped responding: touch wasn't recognised, calibration didn't activate and the onscreen keyboard couldn't be opened. After rebooting the machine everything worked normally again, so the real cause remains a mystery.
The latest version of the Notebook software has support for handwriting recognition. The feature is still incomplete, and many Finnish words weren't recognised correctly. Finnish is not selectable in the configuration, so we had to use German, which may have caused some of the problems. There's also an annoying bug that places the last letter of a word on a different line. Promethean's handwriting recognition worked better.
- Handwriting recognition has problems
- Latency is sometimes annoyingly large
Here are some of the problems we noticed in the Smartboard software:
-Antti Sokero (Technical work by Juha Erkkilä and Veli-Matti Lintu)
Promethean's pen resembles a normal ballpoint pen with a single button attached. When the pen is moved within a couple of millimeters of the whiteboard, it is possible to move the mouse cursor without pressing the left button; the other tested models didn't have this feature. The left mouse button is pressed when the pen touches the board. The tested model came with two identical pens. In theory there is support for multitouch, but the tested whiteboard would need an update for it to work, so whether it works with linux couldn't be tested this time.
We had tested Promethean some time ago, and its linux support has improved noticeably. Software packages for Ubuntu are distributed through an apt repository, which makes installation and updates easy; the other vendors don't offer their software this way. For normal users this makes installation pain-free. The software is translated into both Finnish and Swedish.
After installation, Promethean ActivInspire works nicely, and calibration can be started by pressing Promethean's logo on the whiteboard with the pen. This makes it easy to use the whiteboard without a mouse or keyboard. Dualhead support works as well, and calibration is easy to use in a dualhead setup.
Latency is sometimes noticeable, but overall speed is good. The most annoying flaw is the random cuts in touch recognition: solid lines often come out dashed if one doesn't really concentrate on pressing the pen properly. This might not be a big issue once one gets used to the whiteboard and pen. Pressing the pen wrongly also creates annoying sounds. We couldn't figure out whether the sensitivity can be adjusted.
Even if installation is pain-free on a single computer, centrally managed environments cause various problems. Promethean requires a custom kernel module to work, and when the software is installed in an LTSP fat client chroot, the running kernel is different from the kernel used on the fat clients, which makes the kernel module installation fail.
The other technical problems are file permission and security issues caused by too relaxed permissions; there are more details at the end of the article. We have reported the findings to the developers, but we haven't heard back from them yet.
+ Pen can be used to move the mouse cursor without simulating the left mouse button at the same time
+ Packaging and apt repository
+ Easy to do tasks needing good precision
+ Dualhead support works nicely
- Requires a closed source kernel module to work
- Pen touch recognition cuts off randomly
Here are technical problems found in the Promethean software:
-Antti Sokero (Technical work by Juha Erkkilä and Veli-Matti Lintu)
MimioStudio and its drivers are needed on linux for MimioTeach to work. MimioStudio was the only software package in the test that was not translated into Finnish.
There were quite a few problems with the MimioStudio installation package. The biggest issue is that the installer assumes too much about the environment, which causes installation to fail in many cases. For example, the installation results differ depending on whether one starts the installation by clicking an icon on the desktop or by running the same file from the command line. There are more details at the end of this article. The test results and problems have also been reported to the developers.
Once the software is installed, MimioTeach works nicely. We mainly used it on LTSP fat clients that run all software locally on the client instead of on the server. On the first run, Mimio was connected to a normal locally installed workstation, and the software performed an automatic firmware update after the device was connected with a USB cable. The update worked without problems, and the wireless receiver worked as well.
Using the normal linux desktop with Mimio's pen was easy and fun. Touching the surface with the pen equals pressing the left mouse button, and the pen also has two programmable buttons. Touch sensitivity and overall feel were among the best in the test. There's practically no latency, which creates the illusion of using a real ink pen.
Dualhead support didn't work as expected: calibration failed if the screens had different resolutions. When both displays had the same resolution, calibration worked, and cloned displays worked as well.
The overall impression of MimioStudio from a normal user's point of view was quite polished and stable. The software activates itself when Mimio hardware is connected; alternatively, a product key can be entered when the program starts. In some cases registration didn't work correctly and the licensing information wasn't saved. There's no support for license management in an LTSP fat client environment.
MimioStudio lacks features compared to Smart Notebook and Promethean ActivInspire (e.g. handwriting and shape recognition are not supported), but works well otherwise. On the normal linux desktop, Mimio takes the lead because of its accuracy, low latency and programmable buttons.
- Ubuntu/Debian packaging is incomplete and unstable. In some cases the software ends up unusable after installation.
- MimioStudio doesn't support the Finnish language
Here's a list of issues regarding MimioStudio's linux compatibility from a technical point of view.
-Antti Sokero (Technical work by Juha Erkkilä and Veli-Matti Lintu)
Feature | Cleverboard3 | Cleverboard Dual | Mimio | Smart Board | Promethean |
--- | --- | --- | --- | --- | --- |
Hardware status / problem debugging from software | No user friendly way. Possible to see from command line if pen movements are received in software. | Not possible. | Yes. Control panel has a list of connected devices. Wireless dongle is listed as a separate device. | Yes. Hardware information can be seen in control panel. | Yes. Hardware information can be seen in control panel. |
Firmware updates on linux | No information | No information | Yes. Automatic when usb cable is connected. | Yes. Asks for update in notification bubble. Update was not carried through while testing. | No information |
Linux driver implementation | No custom module. | No custom module. | No custom module. | No custom module. | Requires a custom kernel module. In some cases installation can cause problems. |
Dualhead support | No | No | Yes, but partially broken. No clear requirements; in the test setup both displays had to have the same resolution for the software to work. | Yes. The whiteboard display is selected with the space key on the keyboard. | Yes. The whiteboard display is selected by clicking the mouse on the right display. |
Multitouch support | No | No (only win 7) | No | Not with tested hardware. Recognises multiple fingers and averages them, no random jumping. | Didn't work with tested hardware. ActivInspire seems to have support, but the test whiteboard didn't have pens for testing. According to the vendor also a firmware update is needed. |
Software resource usage | No problems | No problems | No problems. Notebook software took at most 60% of cpu time. | No problems. When adding multiple images and video latency grows. | No problems. |
Calibration | Calibration is started from Lynx software menu. Pen doesn't work if Lynx software is not running. | Calibration is started from panel's notification area. | Really easy. Calibration is started by pressing a special physical key on the whiteboard. | Calibration is started from Smartboard icon in panel's notification area. | Really easy. Upper left corner of the whiteboard has a physical button that starts the calibration. Also panel icon can be used to start the calibration. |
Response to pen movement | Movement lags behind noticeably. | Movement lags behind slightly. | Excellent. Immediate response even with sudden movement. | Follows 0.5-1.0 seconds behind. The Notebook software works better than the linux desktop. Accuracy is good and the movement follows the pen exactly even when it lags behind. | Good. Movement lags behind only slightly. |
Touch sensitivity | Good. Touch is recognized from the pen and sensitivity is good. | Unreliable. Misclicks are common and drawings are often clumsy. Usage needs some practice. | Good. Touch is recognized well. A quiet buzz from the pen gives additional feedback. | Good. Default settings work well. | Needs concentration. When drawing for longer periods touch is lost randomly and the drawing has "holes". Click sensitivity is good. |
Finger touch recognition | No, only pen. | Yes, meant to be used with finger, but works with any pointing device. | No, only pen. | Yes, works well. | No, only pen. |
Touch accuracy | Poor. Calibration doesn't work well with current software. | Average. Drawing a single dot can be challenging. | Good. Accuracy is worse near the receiver, but otherwise really good. Receiver positioning needs some thought. | Good. Some parts of the whiteboard recognize touch a few millimeters off. | Good. Within a few millimeters. |
Pen buttons | Pen has multiple buttons, but no clear information was available on whether they are programmable. | No pen. Right mouse button can be emulated by keeping a finger steady for 2.5 seconds. | Two programmable buttons. E.g. unix copy-paste with the middle button is usable. | No buttons on the pen; the right mouse button can be used by pressing a physical button on the whiteboard. | One button that acts as the right mouse button. No information on whether it is programmable. |
Eraser | No physical eraser. | No physical eraser. | No physical eraser. | Yes, a physical eraser-like plastic piece. | No physical eraser. |
Usability of linux desktop and normal linux programs | Poor. Calibration inaccuracy makes desktop use impossible. | Average. Usable after getting used to it. Not good for tasks needing high accuracy. | Very good. Two programmable buttons enable e.g. unix copy-paste. Accuracy and feel are good. | Good. Getting used to the right mouse button takes some time, but it can be done in front of the whiteboard. | Pretty good. Pen sensitivity and random loss of touch make it a bit annoying. Right mouse button is in the pen. Works reasonably well. |
Moving mouse cursor without pressing the left mouse button | No | No | No | No | Yes. Keeping the pen within a couple of millimeters of the whiteboard equals moving the mouse; touching the whiteboard equals pressing the left button. |
License management in LTSP environment | No documentation | No documentation | No documentation | No documentation | No documentation |
Feature | Lynx 4.0 (Cleverboard) | MimioStudio | Smart Notebook | Promethean – ActivInspire |
--- | --- | --- | --- | --- |
Version | 4.3.21 | 7.11 | 10.2 | 1.5.34144-2 |
Website | Lynx 4 FAQ | Mimio Studio Linux | Smart Notebook | Promethean |
Package size | Core: 152 MB; all materials: 824 MB; Cleverboard Dual with MultiTouch driver: 4.63 MB | English only: 142 MB; all languages: 369 MB | Not checked | 141 MB (driver, material, software, etc.) |
Packaging format | bin-package | deb-package | deb-package | deb-package |
Licenses (still need more information) | Closed source. A single-user license can be installed on two computers, a workstation and a laptop. A site license is available, but details are still fuzzy. | Closed source. No restrictions on the number of computers / users. Software is activated by connecting Mimio hardware or by entering a product key. | Closed source. | Closed source. |
Supported linux distributions | Linux x86 2.6.27 and greater | Fedora 13, 14; openSUSE 11.3; Ubuntu 10.04, 10.10 | Debian / Ubuntu; RPM | Ubuntu 9.10, 10.04; Debian Lenny, Squeeze; Linkat 3; Linex Colegios 2010; ALTLinux |
Language support | English, Swedish, Finnish | English, Swedish | English, Swedish, Finnish | English, Swedish, Finnish |
Information available about differences between different OS versions | No | No | Information sheet. No information available whether document is up-to-date. | No |
Support for embedding images | Yes | Yes | Yes | Yes |
Support for embedding audio files | Yes | Yes | ||
Support for embedding video files | Doesn't work | Yes | Yes | Yes |
Support for embedding flash files | Yes | Yes | ||
Curtain | Yes | Yes (doesn't work with metacity compositing) | Yes | Yes |
Spotlight | Yes | Yes (doesn't work with metacity compositing) | Yes | Yes |
Handwriting recognition | No | No | Yes, but incomplete. German must be selected as the recognition language. The last letter is often placed on a separate line, which is quite annoying. | Yes, but poor whiteboard sensitivity causes problems. Words are often recognised incorrectly. Scandinavian letters work ok. |
Shape recognition | Yes. A bit slow, though. | Yes, either already drawn objects or while drawing. | Yes, works while drawing. | |
Drawing on top of desktop | Using a screenshot | Yes, takes a screenshot and turns it into a drawing layer. | Yes, activated by lifting a pen from the whiteboard | Requires metacity compositing support in gconf. |
Onscreen keyboard | Yes | Yes | Yes. The physical whiteboard has a special button to activate it. | Yes |