Limited resources, in this millennium?

Thursday, 30. Aug 2007

[ The full blogging experience, this time drunk as a bricklayer… :-]

Today I want to excrete my approach to the tricky problem of what should be done about reaching the limits of limited resources. Please excuse my grammar etc., since my IQ has temporarily been reduced below that of aspell, and I am not writing this with Emacs, which reduces my degrees of mental freedom even more.

Users of the full Desktop experience who run Gnome, KDE, GNUstep, or even fvwm on a modern piece of PC hardware don’t really have to worry about running out of resources. They configure their system to be adequate for their needs and generally accept that running out of resources, while painful, is a problem they have to solve by upgrading their system or by adjusting their choice of software to something more suitable. Thus is the life of the savvy computerer.

Running desktop software on mobile consumer devices, on the other hand, poses the immediate challenge of how to cope with limited resources in the hands of users who shouldn’t have to care about the amount of RAM they have. Today, it is accepted that owners of an iPod make an informed decision about how much storage that glorified mp3-decoder-plus-amplifier has, but actually, Apple knows its customers and is marketing the things in t-shirt sizes: hundreds of songs, thousands of songs, millions of songs…

The point being: people understand that devices have a limited amount of resources and that more is most of the time better, but how much they really need in hard MiBs is quite the mystery. Still, running out of resources is not a surprise. Everybody is used to running out of money, after all.

When it happens, though, the experience of actually running out of space should be so pleasant that you go and buy the next bigger model instead of giving up on the whole fad of digital entertainment entirely.

In my opinion, Unix has coped quite well with limited resources, AS LONG AS there is a knowledgeable system administrator available who can migrate home directories to a bigger disk array, etc.

But what about devices like the Nokia N800 that attempt to put half of Gnome into the hands of Joe Couchpotato, without a sysadmin to be found?

When Joe runs out of resources, he is not going to check /var/log/messages and order a bigger flash chip from Samsung and solder it in. Unless he is convinced that he just requested something impossible from his device, he is going to catapult his device into a corner and write an angry blog post about it. (Oh, how I miss the days when posting something to the whole world required more skill than piggybacking some memory…)

So, finally, what can be done about running out of resources? The following is what I wrote into a bug report about GConf behaving erratically when there is no storage left for it to save its database. I would like the maemo architecture to evolve along these ideas…

Operations, regardless of whether they consume memory, storage, or CPU time, can be broadly classified into “small” ones and “large” ones. Small operations are expected to consume a small amount of resources and not having this small amount available is considered to be a pre-condition violation. Large operations are the ones that are expected to consume huge amounts of resources which might or might not be available. What happens when you run out of resources is part of the defined behavior of the operation and will be handled in the normal course of action.

Examples of small operations are allocating memory for GObjects, writing your PID into /var/run/, and computing the length of a string. If there are not enough resources available for these operations, drastic things are allowed to happen, like the process aborting, or not being responsive for so long that the user is tempted to kill it.

Examples of large operations are generally associated with external data: allocating memory for a user-supplied image, downloading a file to local storage, or applying a filter to an image. If these operations run out of resources, they should handle it gracefully, with proper progress bars, clear indications to the user, and options for the user to repair the situation and try again.
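As an aside, GLib already encodes this distinction in its allocators: g_malloc() aborts the process when memory runs out, while g_try_malloc() returns NULL and leaves the failure to the caller. Here is a minimal sketch of how the two classes of operations might use them; the Point type, point_new() and image_buffer_new() are made-up illustrations, not anything from GConf:

    #include <glib.h>

    /* "Small" operation: a fixed-size bookkeeping structure.  Not getting
       this memory is a pre-condition violation, so aborting is acceptable,
       and g_malloc() does exactly that on failure. */
    typedef struct { int x, y; } Point;

    static Point *
    point_new (void)
    {
      return g_malloc (sizeof (Point));   /* aborts the process on OOM */
    }

    /* "Large" operation: a buffer whose size comes from external data.
       Running out of memory is part of the defined behavior and must be
       reported so the UI can show it to the user.  (Overflow checks on
       the multiplication are omitted in this sketch.) */
    static guchar *
    image_buffer_new (gsize width, gsize height, GError **error)
    {
      guchar *buf = g_try_malloc (width * height * 4);  /* may return NULL */

      if (buf == NULL)
        g_set_error (error, G_FILE_ERROR, G_FILE_ERROR_NOMEM,
                     "Not enough memory for the image");
      return buf;
    }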

The system as a whole is responsible for dealing with small operations running out of memory. When memory gets tight, the system should warn the user about it and offer some ways out: starting to swap, killing background applications, popping up a dialog that lets the user close some applications, etc.

The same should apply to running out of storage for small operations: the system should warn the user when storage gets dangerously full and offer some cleanup options (like moving user documents from internal flash to a memory card).

We are quite some way away from making this work perfectly; for one thing, applications don’t give any indication whether they are performing a “small” operation or a “large” one. For example, the memory allocator in libc should know whether it has to try its damned best to satisfy a request, or whether it may fail it so that other, more important requests can be fulfilled later.

Likewise, there should be storage reserved for small operations. (The distinction is not between “root” and “user”, with the implication that root operations are somehow more important than user operations.) Large operations should not be allowed to consume the space reserved for small operations, and when small operations do start eating into that reserve, the system should go into “the ship is sinking” mode and urge the user to make space.
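To make the reservation concrete, a large operation could check the free space against a fixed reserve before it starts writing. This is only a sketch: the 2 MiB threshold and the helper name large_write_allowed() are made up, and only statvfs() is standard POSIX:

    #include <sys/statvfs.h>
    #include <stdbool.h>

    /* Hypothetical system-wide reserve kept free for small operations. */
    #define SMALL_OP_RESERVE (2ULL * 1024 * 1024)

    /* Returns true if a large operation may write bytes_needed to the
       filesystem containing path without eating into the reserve. */
    static bool
    large_write_allowed (const char *path, unsigned long long bytes_needed)
    {
      struct statvfs st;

      if (statvfs (path, &st) != 0)
        return false;                       /* be conservative on errors */

      unsigned long long avail =
        (unsigned long long) st.f_bavail * st.f_frsize;

      return avail > SMALL_OP_RESERVE
             && avail - SMALL_OP_RESERVE >= bytes_needed;
    }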

Now, is writing the GConf database a small operation or a large one? In my opinion it is small. Size is not really the main deciding factor (and so “small” and “large” might not be the best terms for the two classes of operations). Writing the GConf database is not an operation that the user initiates or is even aware of. GConf must handle it completely transparently and must thus rely on the system to get it out of a tight spot. In the rare case that writing the database actually fails, the user should have been amply warned about the system becoming unstable, so that it is acceptable to lose the unsaved user settings.

Our system doesn’t do anything right now to make small storage operations more robust, but I don’t think we should put any workarounds into GConf for this. When writing the cache fails, it must of course fail in a controlled and sensible way: the database that is already stored must not be lost, and GConf should continue working.
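One well-known way to get that controlled failure is the usual write-to-temporary-then-rename pattern, since rename() replaces a file atomically on POSIX filesystems. A sketch, with made-up function and parameter names:

    #include <stdio.h>
    #include <unistd.h>

    /* Saves a new copy of the database without ever damaging the old one:
       an out-of-space failure (ENOSPC) surfaces while writing the
       temporary file, and the old file is replaced only on success. */
    static int
    save_database_atomically (const char *path, const char *tmp_path,
                              const void *data, size_t len)
    {
      FILE *f = fopen (tmp_path, "wb");

      if (f == NULL)
        return -1;                          /* old database untouched */

      if (fwrite (data, 1, len, f) != len
          || fflush (f) != 0
          || fsync (fileno (f)) != 0)       /* ENOSPC shows up here at latest */
        {
          fclose (f);
          unlink (tmp_path);
          return -1;                        /* old database still intact */
        }

      if (fclose (f) != 0)
        {
          unlink (tmp_path);
          return -1;
        }

      /* rename() atomically replaces the old database with the new one:
         readers see either version, never a half-written file. */
      return rename (tmp_path, path);
    }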


One Response to “Limited resources, in this millennium?”

  1. Eero Tamminen said

    There could be a separate partition for configuration settings. Unfortunately JFFS2 doesn’t handle small, frequently written partitions well at all. Hopefully there will be better Flash file systems in the future…
