Jul 30, 2006

Epiphany just got a whole bunch better

I just stumbled across the unofficial epiphany extensions site. I would recommend all users of epiphany install at least the following;

  • Undo close tab extension

  • Only one close button extension. This also auto-resizes the tab width!

  • Middle click tab close extension. I have eliminated the scroll wheel from my life to combat RSI so with this (and the push scroll extension) I can operate epiphany using the middle mouse button.

See how attractive epiphany looks now?

Epiphany hotness

Jun 1, 2007

Evolution and Conduit Action

The reaction to my previous post was extremely positive, with a lot of interest in the bindings. I've had a bit more free time and have done some more hacking on both the bindings and on integrating evolution support into Conduit. On the first point;

  • evolution-python-0.0.1 is available (download) (API docs) (example)

    • Implements add(), remove(), delete() and update() for ECalComponents into ECal sources

    • Implements convenience setters/getters for ECalComponent summary and descriptions (so you don't need to see the underlying ical types)

    • In short, you should now be able to build note-taking and todo applications, and addressbook managers, on top of evolution-data-server using Python (see the sketch below).
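
For a flavour of what this enables, here is a minimal sketch of creating a todo item from Python. It is illustrative only: the helper names are my shorthand, not necessarily the exact 0.0.1 API, so see the linked API docs and example for the real thing.

```python
# Illustrative sketch only - the helper names here are assumptions;
# check the linked API docs for the actual evolution-python 0.0.1 API.
import evolution

cal = evolution.open_default_task_list()    # an ECal source (assumed helper)
todo = evolution.ECalComponent()            # the underlying ical types stay hidden
todo.set_summary("Release Conduit 0.3.1")   # convenience setter from the bindings
todo.set_description("Fix the remaining sync engine bugs first")
cal.add(todo)                               # add() as implemented in 0.0.1
```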

Work on the 0.3.1 bugfix release of Conduit is coming along nicely; it should be out sometime over the weekend. In between fixing bugs, however, I have been able to introduce some new dataproviders, which in turn will open up a heap of new synchronization possibilities.

  • Thanks to new Conduit contributor Thomas Van Machelen we now support the SmugMug and Picasaweb photo sharing sites.

  • Support for Facebook photos (uploading yours, and downloading your friends') is nearing completion thanks to the excellent, API-complete pyfacebook bindings.

  • By supporting Evolution Memos and TODO items we now support some pretty cool sync combinations;

    • Evolution Memos <--> Backpackit.com and Tomboy Notes

    • Evolution Memos and Contacts <--> iPod

    • gnome-about-me (evolution) <--> Facebook profile

I recognise not all of these sync pathways are immediately useful or new, but they provide a good test of the framework, checking that the design is correct and supplying useful test cases. It's also exciting to think that when network support lands next cycle, direct sync between two computers over the web/LAN will just work, and will support any dataprovider that is available to Conduit.

Sep 3, 2007

Expanding the Sync Space

No screenshots this time, but still good news. Conduit tries to be a good citizen, integrating itself with GNOME technologies. Recently the following hacking has been going on (not all items are complete);

  • An Eye of GNOME plugin allowing photo upload/sync to Flickr, Picasa, SmugMug and Facebook (soon)

  • Gnome user documentation

  • Themeable icons

  • Patches to improve F-Spot's DBus interface to allow two-way syncs with photo sites

  • Continued work on OpenSync integration. We have limited support for arbitrary OpenSync plugins!

  • Canola, n800, Rhythmbox and general music synchronization

There have also been some new contributions;

  • Two-way support for Flickr, Picasa and SmugMug. Now you can move your photos between sites (AKA move my data)

  • A YouTube dataprovider (sync your fave videos each morning)

  • Google Calendar support (sync Evolution with Google Calendar, iPod, etc.)

Anyway, if you are looking to get involved with Conduit, now is a great time. I have just updated the docs on how to write a dataprovider; a rough sketch of the shape of one follows. Here are some ideas for dataproviders people may wish to write (hint hint)
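
The sketch below gives a feel for what a dataprovider looks like. The class attributes and method names are approximate and simplified from memory; treat the updated docs as authoritative.

```python
# Rough, simplified sketch of a Conduit dataprovider - attribute and
# method names are approximate; the project docs are authoritative.
import DataProvider

MODULES = {"MyNotesProvider": {"type": "dataprovider"}}

class MyNotesProvider(DataProvider.TwoWay):
    _name_ = "My Notes"
    _description_ = "Sync notes with an imaginary web service"
    _in_type_ = "note"
    _out_type_ = "note"

    def refresh(self):
        # connect to the service and enumerate the available data
        pass

    def get_all(self):
        # return the LUIDs of every item that can be synced
        return []

    def get(self, luid):
        # return the data item identified by luid
        pass

    def put(self, data, overwrite, luid=None):
        # upload data, returning its (possibly new) LUID
        pass
```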

There will be some new features landing in trunk soon that will allow arbitrary parameters during the conversion process, e.g. transcoding audio/video to another format, resizing photos, compressing files, etc. Watch this space, download Conduit, and go tell people it rocks!

May 7, 2007

Finally A Release

Finally, and "only" 3 months overdue, I have released Conduit 0.3.0. This release marks the end of the sync engine rewrite that began after the previous release. It also signifies the first time that Conduit is simultaneously useful to end users and, as a desktop sync service, to application developers.

[Screenshots](http://www.conduit-project.org/wiki/Screenshots)

What can it do?

From an end-user perspective Conduit has reached the level of being useful. I am currently travelling around Europe for a few months and using Conduit on a daily basis, at least for synchronization/backup of my photos to Flickr and my home server, and for sync/export of my Tomboy notes to iPod. Beyond that, Conduit can currently perform the following sync partnerships;

  • Two way file/folder sync on gnomevfs volumes

  • Two way Tomboy note sync via gnomevfs volumes

  • Two way Tomboy note sync via ipod notes

  • One way sync of files/folders of photos to Flickr

  • One way sync of F-Spot tagged photos to Flickr

In the one-way sync/export case Conduit is smart in the sense that if a piece of data has not been modified then it will not be synchronized/exported again; if it has been modified, it will replace the existing data.

I have also added conflict detection, and a UI for resolving conflicts, including the ability to compare the conflicting data (using gnome-open on the relevant URI).

Desktop Sync Service

I have been talking about the merits of Conduit as a desktop sync service for a while, and this release finally marks the point where I can start to add export and sync capabilities to GNOME apps using Conduit. In this situation Conduit is controlled exclusively via DBus (independent of the UI). By sharing the same database we avoid duplicating data in the synchronization process.

To demonstrate this I modified Uploadr to call Conduit over DBus (and called the resulting app Yaput - yet another photo upload tool) (screenshot). This means that duplicate photos will not be uploaded, updated photos will replace old ones, and it doesn't matter whether you upload the photos from the Conduit GUI, the DBus interface or $(YOUR_APP_HERE). All that using 10 DBus calls!
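
For a taste of what such a client looks like, here is a hedged sketch. The bus name, object path and method names below are illustrative, not Conduit's documented interface.

```python
# Hedged sketch of driving Conduit over DBus - the bus name, object
# path and method names are illustrative, not the documented interface.
import dbus

bus = dbus.SessionBus()
obj = bus.get_object("org.conduit.Application", "/org/conduit/Application")
app = dbus.Interface(obj, "org.conduit.Application")

# Ask Conduit to export a photo to Flickr. Because Conduit keeps its own
# mapping database, an already-uploaded, unmodified photo is skipped.
app.UploadFile("file:///home/john/photos/rome.jpg", "FlickrSink")
```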

Anyway, that's the plan, and apart from a few annoying known issues, I finally feel like I am on track.

May 27, 2014

FlyMAD - The Fly Mind Altering Device

Today I'm proud to announce the availability of all source code, and the advance online publication of our paper

Bath DE*, Stowers JR*, Hörmann D, Poehlmann A, Dickson BJ, Straw AD (* equal contribution) (2014) FlyMAD: Rapid thermogenetic control of neuronal activity in freely-walking Drosophila. Nature Methods. doi 10.1038/nmeth.2973

FlyMAD (Fly Mind Altering Device) is a system for targeting freely walking flies (Drosophila) with lasers. This allows rapid thermo- and opto- genetic manipulation of the fly nervous system in order to study neuronal function.

|filename|images/strawlab/flymad_intro_sml.png

The scientific aspects of the publication are better summarised on nature.com, here, on our laboratory website, or in the video at the bottom of this post.

Briefly, however: if one wishes to link function to specific neurons, one can conceive of two broad approaches. First, observe the firing of the neurons in real time using fluorescence or other microscopy techniques. Second, use genetic techniques to engineer organisms with light- or temperature-sensitive proteins bound to specific neuronal classes, such that by the application of heat or light, activity in those neurons can be modulated.

Our system takes the second approach, our innovation being that by using real-time computer vision and control techniques we are able to track freely walking Drosophila and apply precise (sub-0.2 mm) opto- or thermogenetic stimulation to study the role of specific neurons in a diverse array of behaviours.

This blog post will cover a few of the technical and architectural decisions I made in the creation of the system. Perhaps it is easiest to start with a screenshot and a schematic of the system in operation.

|filename|images/strawlab/flymad_screenshot_sml.png

Here one can see two windows showing images from the two tracking cameras, with the associated image processing configuration parameters (and their results, at 120 fps). Visible at the bottom centre is the ROS-based experimental control UI. Schematically, the two cameras and lasers are arranged as follows

|filename|images/strawlab/render2_sml.png

In this image you can also see the Thorlabs 2D galvanometers (top left), and the dichroic mirror which allows aligning the camera and laser on the same optical axis.

By pointing the laser at flies freely walking in the arena below, one can subsequently deliver heat or light to specific body regions.

General Architecture

The system consists of hardware and software elements. A small microcontroller and digital-to-analogue converter generate the analog control signals that point the 2D galvanometers and control the laser power. The device communicates with the host PC over a serial link. There are two cameras in the system: a wide camera for fly position tracking, and a second high-magnification camera for targeting specific regions of the fly. This second camera is aligned with the laser beam, and its view can be pointed anywhere in the arena by the galvanometers.
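
Commanding such a device from the host is simple. The sketch below shows the general idea; the framing, baud rate and 12-bit scaling are my assumptions, not FlyMAD's actual serial protocol.

```python
# Assumed framing and scaling - FlyMAD's real serial protocol may differ.
import struct
import serial

link = serial.Serial("/dev/ttyUSB0", baudrate=115200)

def point_galvos(x_dac, y_dac, laser_power):
    """Send two 12-bit galvo DAC setpoints and an 8-bit laser power."""
    packet = struct.pack("<BHHB", 0x50,               # command byte
                         x_dac & 0x0FFF, y_dac & 0x0FFF,
                         laser_power & 0xFF)
    link.write(packet)

point_galvos(2048, 2048, 128)   # centre the mirrors, laser at half power
```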

The software is conceptually three parts: image processing code, tracking and targeting code, and experimental logic. All software elements communicate using the Robot Operating System (ROS) interprocess communication layer. The great majority of the code is written in Python.

|filename|images/strawlab/path8510_sml.png

Robot Operating System (ROS)

ROS is a framework traditionally used for building complex robotic systems. In particular it has relatively good performance and a simple, strongly typed inter-process communication framework and serialization format.

Through its (pure) Python interface one can build a complex system of multiple processes that communicate (primarily) by publishing and subscribing to message "topics". An example of the ROS processes running during a FlyMAD experiment can be seen below.

|filename|images/strawlab/s3_crop_sml.png

The lines connecting the nodes represent the flow of information across the network, and all messages can be simultaneously recorded (see /recorder) for later analysis. Furthermore, the isolation of the individual processes improves robustness and defers some of the responsibility for realtime performance from myself / Python to the kernel and to the overall architecture.
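
A minimal node shows how little ceremony this involves in Python. The topic names and message type here are illustrative rather than FlyMAD's actual ones.

```python
#!/usr/bin/env python
# Minimal rospy node: subscribe to tracked fly positions and republish
# a targeting command. Topic names and message type are illustrative.
import rospy
from geometry_msgs.msg import Point

def on_fly_position(msg):
    # in the real system, the targeting logic runs here
    rospy.loginfo("fly at x=%.2f y=%.2f", msg.x, msg.y)
    pub.publish(msg)

rospy.init_node("example_targeter")
pub = rospy.Publisher("/flymad/laser_target", Point)
rospy.Subscriber("/flymad/fly_position", Point, on_fly_position)
rospy.spin()   # process callbacks until shutdown
```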

For more details on ROS and on why I believe it is a good tool for creating reliable reproducible science, see my previous post, my Scipy2013 video and presentation

Image Processing

There are two image processing tasks in the system. Both are implemented as FView plugins and communicate with the rest of the system using ROS.

Firstly, the position of the fly (or flies) in the arena, as seen by the wide camera, must be determined. Here a simple threshold approach is used to find candidate points, and image moments around those points are used to find the centre and slope of the fly body. A lookup table is then used to point the galvanometers, in an open-loop fashion, approximately at the fly.
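
In OpenCV terms the moment computation is only a few lines. A minimal sketch of this wide-camera step (the threshold value is arbitrary here):

```python
import cv2
import numpy as np

def find_fly(gray):
    """Estimate the fly centroid and body-axis angle from image moments.
    A sketch of the wide-camera step; the threshold value is arbitrary."""
    _, binary = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
    m = cv2.moments(binary, binaryImage=True)
    if m["m00"] == 0:
        return None                       # nothing above threshold
    cx = m["m10"] / m["m00"]              # centroid x
    cy = m["m01"] / m["m00"]              # centroid y
    # body slope from the second-order central moments
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return cx, cy, theta
```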

With the fly now located in the field of view of the high-magnification camera, a second real-time control loop is initiated. Here the fly body or head is detected, and a closed-loop PID controller finely adjusts the galvanometer position to achieve maximum targeting accuracy. The accuracy of this through-the-mirror (TTM) system asymptotically approaches 200 μm, and at 50 ms from onset the accuracy of head detection is 400 ± 200 μm. Considering the other latencies in the system from the onset of TTM mode (gigabit ethernet, 5 ms; USB delay, 4 ms; galvanometer response time, 7 ms; image processing, 8 ms; image acquisition, 5-13 ms; roughly 32 ms in total), this shows that the real-time targeting stabilises after 2-3 frames and comfortably operates at better than 120 frames per second.

|filename|images/strawlab/s1_crop_sml.png

To reliably track freely walking flies, the head and body detection image processing steps must take less than 8 ms. Somewhat frustratingly, a traditional template matching strategy worked best. On the binarized, filtered image, the largest contour is detected (c, red). Using an ellipse fit to the contour points (c, green), the contour is rotated into an upright orientation (d). A template of the fly (e) is compared with the fly in both orientations and the best match is taken.
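
A condensed sketch of that pipeline, using the OpenCV 2.4-era Python API (the real code also implements the graceful fallback described below):

```python
import cv2
import numpy as np

def match_fly(binary, template):
    """Largest contour -> ellipse fit -> rotate upright -> template match
    in both orientations. A condensed sketch of the pipeline."""
    contours, _ = cv2.findContours(binary.copy(), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    cnt = max(contours, key=cv2.contourArea)
    (cx, cy), _, angle = cv2.fitEllipse(cnt)

    best_score, best_angle = -1.0, angle
    for flip in (0.0, 180.0):             # the fly may point either way
        rot = cv2.getRotationMatrix2D((cx, cy), angle + flip, 1.0)
        upright = cv2.warpAffine(binary, rot, binary.shape[::-1])
        score = cv2.matchTemplate(upright, template,
                                  cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_score, best_angle = score, angle + flip
    return (cx, cy), best_angle           # position and resolved orientation
```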

I mention the template strategy as being disappointing only because I spent considerable time evaluating newer, shinier, feature-based approaches and could not achieve the closed-loop performance I needed. While the newer descriptors (BRISK, FREAK, ORB) were faster than the previous class, none were, in total, significantly more robust to changes in illumination than SURF - which itself could not meet the <8 ms deadline reliably. I also spent considerable time testing edge-based (binary) descriptors such as edgelets, and edge-based (gradient) approaches such as dominant orientation templates and gradient response maps. The most promising of this class was local shape context descriptors, but there too I could not get the runtime below 8 ms. Furthermore, one advantage of the contour-based template matching strategy I implemented was that graceful degradation was possible: should a template match not be found (which occurred in <1% of frames), an estimate of the centre of mass of the fly was still present, which allowed degraded targeting to continue. No such graceful fallback was possible using feature-correspondence based strategies.

There are two implementations of the template match operation: GPU and CPU based. The CPU matcher uses the Python OpenCV bindings (and numpy in places); the GPU matcher uses cython to wrap a small C++ library that does the same thing using OpenCV 2.4's CUDA support (which is not otherwise accessible from Python). Conveniently, the Python OpenCV bindings use numpy arrays to store image data, so passing data from Python to native code is trivial and efficient.

I also gave a presentation comparing different strategies of interfacing python with native code. The provided source code includes examples using python/ctypes/cython/numpy and permutations thereof.
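
As an example of how thin that boundary is, handing a numpy image to a wrapped native routine via ctypes takes only a couple of lines. The library and function names below are invented for illustration.

```python
# Library name and function signature are invented for illustration;
# the point is that numpy exposes its buffer to native code directly.
import ctypes
import numpy as np

lib = ctypes.CDLL("libttm_match.so")   # hypothetical wrapped C++ library

frame = np.zeros((480, 640), dtype=np.uint8)
frame = np.ascontiguousarray(frame)    # guarantee C-contiguous layout

# pass the raw buffer pointer plus dimensions - no copying required
lib.match_template(frame.ctypes.data_as(ctypes.POINTER(ctypes.c_ubyte)),
                   ctypes.c_int(frame.shape[1]),   # width
                   ctypes.c_int(frame.shape[0]))   # height
```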

The GPU code-path is only necessary / beneficial for very large templates and higher resolution cameras (as used by our collaborator) and in general the CPU implementation is used.

Experimental Control GUI

To make FlyMAD easier to manage and use for biologists I wrote a small GUI using Gtk (PyGObject), and my ROS utility GUI library rosgobject.

|filename|images/strawlab/gflymad_sml.png

On the left you can see buttons for launching individual ROS nodes. On the right are widgets for adjusting the image processing and control parameters (these widgets display and set ROS parameters). At the bottom are realtime statistics showing the TTM image processing performance (as published to ROS topics).
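
The pattern rosgobject automates is essentially binding a widget's value to a ROS parameter. A hand-rolled sketch of the idea (the parameter name is illustrative):

```python
# Hand-rolled sketch of the widget-to-parameter binding that rosgobject
# automates; the ROS parameter name here is illustrative.
from gi.repository import Gtk
import rospy

rospy.init_node("param_gui", anonymous=True)

scale = Gtk.Scale.new_with_range(Gtk.Orientation.HORIZONTAL, 0, 255, 1)
scale.connect("value-changed",
              lambda w: rospy.set_param("/flymad/ttm/threshold",
                                        int(w.get_value())))

win = Gtk.Window(title="FlyMAD parameter demo")
win.connect("destroy", Gtk.main_quit)
win.add(scale)
win.show_all()
Gtk.main()
```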

Following good ROS practice, once reliable values are found for all adjustable parameters they can be recorded in a roslaunch file, allowing the whole system to be started in a known configuration from a single command.

Manual Scoring of Videos

For certain experiments (such as courtship), videos recorded during the experiment must be watched and behaviours manually annotated. To my surprise, no tools exist to make this relatively common behavioural neuroscience task any easier (and easier matters: it is not uncommon to score tens to hundreds of hours of video).

During every experiment, raw uncompressed videos from both cameras are written to disk (uncompressed video is chosen for performance reasons, because SSDs are cheap, and because each frame can be precisely timestamped). Additionally, rosbag files record the complete state of the experiment at every instant in time (as described by all messages passing between ROS nodes). After each experiment finishes, the uncompressed videos from each camera are composited together, along with metadata such as the frame timestamp, and an H.264-encoded MP4 video is created for scoring.

After completing a full day of experiments one can then score / annotate the videos in bulk. The scorer is written in Python, uses Gtk+ and PyGObject for the UI, and vlc.py for decoding the video (I chose vlc due to the lack of working gstreamer PyGObject support on Ubuntu 12.04).

|filename|images/strawlab/scorer_sml.png

In addition to allowing play, pause and single-frame scrubbing through the video, pressing any of the qw, as, zx, cv pairs of keys indicates that a behaviour has started or finished. At that instant the current video frame is extracted from the video, and optical character recognition is performed on the top-left region of the frame in order to extract the timestamp. When the video is finished, a pandas dataframe is created which contains all the original experimental rosbag data and the manually annotated behaviour on a common timebase.
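
Aligning the two time series then amounts to a nearest-timestamp join. A sketch of the idea with pandas (column names invented for illustration):

```python
import pandas as pd

# Sketch of aligning manual annotations with rosbag-derived experiment
# state on a common timebase; column names are invented for illustration.
bag = pd.DataFrame({"t": [0.00, 0.04, 0.08, 0.12],
                    "laser_on": [0, 1, 1, 0]})
scored = pd.DataFrame({"t": [0.05, 0.11],
                       "behaviour": ["wing_ext_start", "wing_ext_stop"]})

# join each annotation to the nearest preceding experiment sample
merged = pd.merge_asof(scored.sort_values("t"), bag.sort_values("t"), on="t")
print(merged)
```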

Distributing complex experimental software

The system was run not only by myself, but by collaborators, and, we hope, in future by others too. To make this possible we generate a single-file self-installing executable using makeself, and we officially support only one distribution and architecture - Ubuntu 12.04 LTS on x86_64.

The makeself installer performs the following steps

  1. Adds our Debian repository to the system
  2. Adds the official ROS Debian repository to the system
  3. Adds our custom ROS stacks (FlyMAD from tarball and rosgobject from git) to the ROS environment
  4. Calls rosmake flymad to install all system dependencies and build any non-binary ROS packages.
  5. Creates a FlyMAD desktop file to start the software easily

We also include a version check utility in the FlyMAD GUI which notifies the user when a newer version of the software is available.

The Results

Using FlyMAD and the architecture I have described above, we created a novel system to perform temporally and spatially precise opto- and thermogenetic activation of freely moving Drosophila. To validate the system we showed distinct timing relationships for two neuronal cell types previously linked to courtship song, and demonstrated the system's compatibility with visual behaviour experiments.

Practically, we were able to develop and simultaneously operate this complex real-time assay in two countries. The system was conceived and built in approximately one year using Python. FlyMAD utilises many best-in-class libraries and frameworks (OpenCV, numpy, ROS) in order to meet its demanding real-time requirements.

We are proud to make the entire system available to the Drosophila community under an open source license, and we look forward to its adoption by our peers.

For those still reading, I encourage you to view the supplementary video below, where its operation can be seen.

Comments, suggestions or corrections can be emailed to me or left on Google Plus

May 21, 2008

Frantic

Frantic would be how I'd describe my last two weeks. I have had very little free time to work on Conduit. Everything seems to have come at once!

Random

  • I have been playing with barpanel, a very functional GNOME panel replacement.

  • Grape is certainly an interesting UI/desktop mock-up. If I had infinite spare time I might have a hack on it, as an excuse to play with Clutter.

  • Props to Jan Bodnar for his excellent Gtk+ and Cairo tutorials.

  • My (bad) experiences with Ubuntu 8.04 are best described by the following picture: Firefox crashing

Openstreetmap GPS Mapping Widget

Somewhat tangentially related to my PhD, I have been hacking on a simple OpenStreetMap GPS mapping/display widget - basically because, after investigating all the existing mapping programs on Linux, I found none that supported OpenStreetMap/OpenAerialMap and could be easily embedded.

OSM GPS map widget

It's basically a port of tangoGPS (by Marcus Bauer) to libsoup, plus considerable clean-up. The whole thing is now hidden behind a derived GtkDrawingArea with a nice simple four-function API (other parameters such as zoom, lat and lon are accessible as GObject properties)

  1. set_map_center(double lat, double lon, int zoom)

  2. add_gps_point(double lat, double lon)

  3. add_roi(double lat, double lon, GdkPixbuf *pixbuf)

  4. get_bounding_box()

Things like double-click, map dragging, scroll-to-zoom, etc. are all handled automatically as you would expect. It caches downloaded tiles, and it's pretty much complete at this point. I hope to be able to post code soon.
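
To show how simple embedding should be, here is a hypothetical usage sketch from Python, assuming PyGTK bindings that mirror the C API one-to-one (the module name is an assumption).

```python
# Hypothetical usage, assuming PyGTK bindings that mirror the C API;
# the module name 'osmgpsmap' is an assumption.
import gtk
import osmgpsmap

osm = osmgpsmap.GpsMap()
osm.set_map_center(-36.85, 174.76, 12)   # Auckland at zoom level 12
osm.add_gps_point(-36.85, 174.76)        # plot a GPS fix

win = gtk.Window()
win.set_default_size(640, 480)
win.connect("destroy", gtk.main_quit)
win.add(osm)
win.show_all()
gtk.main()
```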

Feb 17, 2006

Gnome 2.13.91 jhbuild Fun

Well, I spent most of the afternoon trying to get Gnome 2.13.91 (2.14 Beta 2) to build. The first problem was with libxklavier: the version in jhbuild fails to build. I filed this bug, but in the meantime replacing it with libxklavier 2.1 fixes things.

The only other problem is with epiphany, which fails to build because it cannot find iso-codes. I actually think the problem is with iso-codes itself, because it cannot be found even from within a jhbuild shell. Maybe iso-codes' make install is broken.

Anyway, tomorrow I will have a go at running jhbuild in a chroot (I'm trying to get a working GnomeLiveCD built using jhbuild). More on that later.

Feb 20, 2006

Gnome Documentation

Well, by virtue of having a working jhbuild install of Gnome 2.13.91, I have volunteered myself to update the nautilus section of the gnome user docs. What a job that is shaping up to be!

I have been rewriting a lot of the docs and taking a heap of (matching) screenshots along the way, so when I finish (which will be in a day or two) the new user guide will be rocking!

Mar 2, 2006

Gnome Documentation System

Just started hacking on my attempt to implement a more modern documentation system for the gnome user guide.

So far this involves a few components: a modified version of moinmoin 1.5, a hacked version of yelp with XML-RPC support, and some Python scripts.

Basically, moinmoin now supports DocBook generation and has a nice XML-RPC interface. The idea is that periodically the wiki version of the gnome-user-guide is crawled and DocBook is generated; yelp can also fetch the latest version of the documentation from the wiki and display it.
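
Fetching a page over MoinMoin's XML-RPC interface is pleasantly short. A sketch (the wiki URL and page name are placeholders):

```python
# Sketch of crawling the wiki over XML-RPC; the URL and page name are
# placeholders. MoinMoin exposes this via the xmlrpc2 action.
import xmlrpclib

wiki = xmlrpclib.ServerProxy("http://wiki.example.org/?action=xmlrpc2")
text = wiki.getPage("GnomeUserGuide")   # raw wiki markup for the page
# ...convert 'text' to DocBook and hand it to yelp...
print text[:200]
```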

A bit of work to go, but a nice project covering lots of disciplines.

Apr 26, 2006

GNOME on the horizon

My responses to this well-composed post of constructive GNOME criticism.

Interesting and well-written post. I am also not a gnome expert, only a user and part-time developer. Here are some suggestions regarding each of the points you made;

Firstly, a few general comments. Personally, on Ubuntu Dapper Beta, Gnome is considerably faster than KDE in my experience, but this may just be a variation of the placebo effect! Either way, all the recent GNOME performance work has sped it up considerably since the version in Breezy!

Secondly, I think a number of your points make the assumption that GNOME = Nautilus, or GNOME = Nautilus + Metacity. This is a common belief (if indeed this is what you thought) and an incorrect one. HOWEVER, this is why we must admit that proportionally a lot of polish must go into these two applications. I don't think that throwing features at Nautilus is the answer - it will never be the GNOME answer, and I am thankful for that. But you do raise some good points, and ignoring any GNOME user is the wrong thing to do.

Now, point by point responses

Instant Messaging: Check out the galago desktop presence framework. While still on the horizon, I foresee some form of integration between the entire desktop and gaim, with galago as the glue

Voice over IP: Check out ekiga. Maybe at some stage it will make it into core GNOME. Either way there is the possibility of integration with the above.

File Sharing: I could not agree more. Apparently nautilus (gnome-vfs) supports Rendezvous discovery of shares and the like, but I don't think this is fully taken advantage of (or I haven't seen it in GNOME 2.14). Perhaps something could be done with the cherokee web server + WebDAV + automatic nautilus discovery + gnome-vfs WebDAV + the Places sidebar. Edit: this was done here, but I think the project died.

Secure Shell GUI: This is already there. Check out Places -> Connect To Server -> SSH. I presume you can also enter sftp://user@server/path

More Command Line Tools: While I understand the problem you want to solve - easier access to the command line - I don't think shoving everything into the file browser is the right way to go about it. If GNOME can think of a better way to interact with the command line in a modern GUI environment then that is a step in the right direction. Putting the terminal into nautilus, not so much.

Menu Editing: Ubuntu Dapper ships the alacarte menu editor. It has also been proposed for inclusion in GNOME 2.16. It's a step in the right direction.

Eye Candy: Do NOT open this can of worms. Personally I think GNOME has KDE beat when it comes to eye candy, but the eye-candy discussion will never be resolved. It's all a matter of preference: I like Toyota, you like Mazda, etc. At the framework level GNOME can draw nice graphics using Cairo, and fun things are happening FOR EVERYONE IN THE LINUX COMMUNITY, GNOME AND KDE, using compiz and XGL or AIGLX. You do have some points - there are features coming into GTK to detect whether a compositing manager is running - but if I want GNOME to keep looking prettier than KDE (joke) then more such stuff needs to go into GTK to make this easier!

Conclusion: I think that this is a very interesting time for GNOME. There is all this cool framework-level stuff sitting there, simmering away, ready to be cooked into GNOME in some amazing ways: leaftag, galago, deskbar (and the buzz it's generating), desktop search, performance improvements, compiz and XGL, and MUCH MORE.

Comments welcome
