Monthly Archives: March 2009

LTSP past and future

For those of you not yet familiar with LTSP, it's the Linux Terminal Server Project, whose goal is to transform a regular workstation into a terminal server that can be used by thin clients. Thin clients are either old computers recycled for the purpose or specialized minimal computers (usually diskless and without moving parts) that boot off the network.

Thin clients evolving

Until now, LTSP was mainly used to make something of those good old P2s sitting unused in the back of the computer lab, but things are gently starting to change. You can still use it with old computers, running everything on the server and thus not using much CPU on the thin clients, but we're now seeing far more powerful thin clients appearing (usually Atom-based) where it would be a waste to use them as regular all-server-side thin clients.

1520-PXE from Diskless Workstation

Localapps are finally there

Starting with Jaunty's LTSP, you can now choose which applications will run on the server and which will run locally.

For those of you not living all day in LTSP's world, our issue was that these thin clients just weren't using their CPU, since everything ran on the server. In order to decrease the load on the servers and use the thin clients a bit more, we got the idea of running some of the applications locally, displaying them just like regular applications (you usually can't tell which ones are remote and which are local). They can access the same files and settings as their remote equivalents, making them, from a user's point of view, almost identical to traditional remote applications (just a bit faster).

This is achieved using LTSP's localapps and a bit of XDG magic. Basically, you can now install firefox in your LTSP chroot, set LOCAL_APPS_MENU to True in your lts.conf, and there you go: your usual firefox running locally on your thin client. The XDG magic takes care of adding the application to the menu if it isn't installed on the server and, if it's already installed on the server, tweaks the launchers to start the localapp.
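As a sketch, the lts.conf side of this could look like the following (the LOCAL_APPS directive and the exact file path are assumptions based on common LTSP 5 setups; check your version's documentation):

```
# lts.conf, e.g. under /var/lib/tftpboot/ltsp/i386/ (path may vary)
[default]
    # Allow selected applications to run on the thin client itself
    LOCAL_APPS      = True
    # Let the XDG magic add or tweak menu entries for local applications
    LOCAL_APPS_MENU = True
```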
As a result you'll see decreased CPU usage on the server, and you'll also spare a lot of bandwidth, as the client fetches content and renders it locally instead of receiving an X11 stream from the server.

New X and multi-head support

X configuration was also improved a lot; in most cases you won't need to do anything, as the most common thin clients are already known and handled, and everything else relies on X's auto-detection.

You can also play with the X RandR extension now and try dual- or tri-head setups with your thin clients just by setting XRANDR_MODE_X and XRANDR_OUTPUT_X (X starting at 0 for the first defined screen); it'll automatically generate a Virtual setting if required by your driver so you can then go dual-head.
I'm currently using my laptop as a thin client hooked up to a 1920×1080 screen over HDMI, its own panel at 1680×1050 and another external screen at 1280×1024 over VGA, all with LTSP.
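For instance, a dual-head lts.conf sketch could look like this (the output names such as LVDS and VGA depend on your video driver, so treat them as placeholders):

```
[default]
    # First screen (index 0): built-in panel
    XRANDR_OUTPUT_0 = LVDS
    XRANDR_MODE_0   = 1680x1050
    # Second screen (index 1): external monitor
    XRANDR_OUTPUT_1 = VGA
    XRANDR_MODE_1   = 1280x1024
```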

And even compiz !!!

For those of you who like eye candy, compiz works perfectly with LDM_DIRECTX set to True (so X11 doesn't get encapsulated in SSH); then just run "env SKIP_CHECKS=yes compiz --replace" in a shell and you'll get compiz (or set it as an autostarted application).
Warning: SKIP_CHECKS starts compiz without performing any sanity checks. This is needed because the checks won't work with LTSP, but it means you have to verify yourself that your video card supports compiz, or you may crash your X server.
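The lts.conf part of this is a single directive; a minimal sketch:

```
[default]
    # Run the X11 session unencrypted instead of tunnelling it over SSH,
    # so compiz gets direct access and decent performance
    LDM_DIRECTX = True
```

You then start compiz manually on the client with the SKIP_CHECKS command shown above, or add it to your session's autostarted applications.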

Clustering for large networks

Since I've been at Revolution Linux I've been working on LTSP-Cluster, a set of components on top of LTSP that makes it load-balanced and easily manageable on very large networks.
Jaunty is the first release with LTSP-Cluster integrated on the thin client side, so if you're managing a large centralized LTSP setup, you may want to have a look at our wiki.

Its core components are:
The Control Center, a web interface (PHP) used to access your network logs and thin client hardware information, organize your thin clients in a tree, and set attributes (the equivalent of lts.conf) based on the MAC address, position in the tree, or even the hardware of the thin client. Due to some design issues and maintenance difficulties it's being rewritten, but it will, at least at the beginning, keep the same database to make migration easier.

The Loadbalancer, made of two components: an agent that runs on each of your servers, and a server that gathers information from the agents and returns the best server every time it gets a request from the Control Center.

The Account Manager, a Python service running on the server that creates new accounts on the fly for autologin users and also manages regular accounts, doing the cleanup and ensuring you aren't logged in twice on the network.
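To give an idea of what "creating accounts on the fly" involves, here is a purely illustrative Python sketch; the function name, naming scheme, and choice of useradd options are my assumptions, not the actual Account Manager code:

```python
# Hypothetical sketch: build the useradd invocation for a temporary
# autologin guest account. The real Account Manager also handles
# cleanup of stale accounts and duplicate-login detection.

def useradd_command(username, home_base="/home/guests"):
    """Return the useradd command line for a fresh autologin account."""
    return [
        "useradd",
        "--create-home",            # give the guest a home directory
        "--base-dir", home_base,    # keep all guest homes in one place
        "--comment", "LTSP autologin guest",
        username,
    ]

# A service would run this (as root), e.g. via subprocess.check_call()
print(" ".join(useradd_command("guest-ws001")))
```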

We also have a few more components that handle NX integration with the load-balancer and generation of pxelinux's configuration files. The howto for a generic setup in OpenVZ is on our wiki; anyone interested in improving the documentation is greatly welcome (just contact me and I'll answer your questions and give you write access to the wiki so you can contribute).

The additional packages you need for the LTSP-Cluster services aren't yet in Ubuntu, so you'll need to use the PPA to install the loadbalancer server/agent and the Control Center. Code is available on Launchpad:

Trying it out

Starting with Hardy, LTSP is part of the Ubuntu Alternate CD.
It can be installed by selecting the "LTSP server" option from the CD boot menu. Installation is easier if you have two network cards: one for the internet and the other for your thin client LAN.
Complete instructions (also valid for Jaunty) can be found here:


If you're interested in LTSP, want to try it out, or just want more information, these are useful resources:
Ubuntu LTSP help:
Edubuntu handbook for LTSP:
LTSP’s website:
IRC: #ltsp (please mention what distribution and version you’re using) or #edubuntu (LTSP used to be part of Edubuntu)

Posted in Planet Ubuntu | 9 Comments

Announcing Mirrorkit – a frontend to debmirror and possibly other mirroring software

After deploying it for a couple of customers at Revolution Linux, I thought it would be a good idea to finally release it publicly.

So here it is: Mirrorkit now has its Launchpad project set up, and Michael Jeanson is now working on getting it packaged for Karmic.

So what is it exactly? Mirrorkit is a simple frontend to debmirror with a user-friendly (I hope) configuration file that helps you create your own Ubuntu/Debian mirror.

It's a Python script that takes an XML file as input, describing both general and per-mirror configuration, including proxy settings, help pages for the user, log output as HTML files, and human-readable (I hope) definitions of the mirrors.
When run (usually from a cron job) it parses the XML file, generates the debmirror command line (or, in the future, that of other mirroring software), runs it, parses the output, and generates a report as an HTML page.
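As an illustration of that XML-to-command-line step, here is a toy Python sketch. The XML schema below is invented for this example and is not Mirrorkit's actual format; the debmirror options, however, are real ones:

```python
# Toy sketch: turn an XML mirror definition into a debmirror command line.
# The element/attribute names are made up for illustration.
import xml.etree.ElementTree as ET

CONFIG = """
<mirrors>
  <mirror name="ubuntu" host="archive.ubuntu.com" root="ubuntu"
          dists="jaunty,jaunty-updates" sections="main,universe"
          arch="i386" dest="/srv/mirror/ubuntu"/>
</mirrors>
"""

def debmirror_command(mirror):
    """Build the debmirror invocation for one <mirror> element."""
    return [
        "debmirror",
        "--host=" + mirror.get("host"),
        "--root=" + mirror.get("root"),
        "--method=http",
        "--dist=" + mirror.get("dists"),
        "--section=" + mirror.get("sections"),
        "--arch=" + mirror.get("arch"),
        "--progress",
        mirror.get("dest"),            # local destination directory
    ]

root = ET.fromstring(CONFIG)
for m in root.findall("mirror"):
    print(" ".join(debmirror_command(m)))
```

The real script would then run the generated command and parse its output to build the HTML report.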

Today, I’ve released version 0.1 that can be downloaded here:
An example of its output can be found here:
And its commented configuration file here:

I’ve been running it for over a year now in my previous school and it’s working like a charm (thanks to debmirror mainly).

Suggestions and bugs can be reported on Launchpad, here:
Code (GPLv2+) can be found here:

Posted in mirrorkit, Planet Ubuntu | 2 Comments