Monday, February 21, 2005

Running User Mode Linux from 2.6.10 Kernel

I don't really know anything about mucking around with the Linux kernel (I'm the type of guy whose favourite text editor is pico and who can't figure out how to use Debian, so he has to use RedHat instead), but somehow I signed up to do a term project involving a bit of kernel hacking, so I guess I have to learn. Since kernel stuff seems a bit messy, I opted to work with User Mode Linux (UML).

Now, when one browses through the main website, there's a lot of stuff about ancient versions of UML based on 2.4 kernels, there are a couple of mentions of 2.6 kernels, and tucked away somewhere, there's a brief aside that UML has been integrated into Linux kernels 2.6.9 and beyond. Seeing as everything else on the main website seemed a little dated, I thought it would be best to use the UML that's integrated into the main Linux branch because it would be more likely to be maintained and tested there. In actuality, the UML code that's been integrated into the kernel doesn't work all that well, and it took me a few days of searching through various websites, wikis, forum discussions, etc. to figure that out. So here are the goods on how I got UML working on my Fedora Core 3 system.

Apparently, the UML in the current 2.6.10 vanilla kernel doesn't work. Instead, grab the 2.6.9 vanilla kernel from kernel.org or somewhere. Then, go to Paolo Giarrusso's UML site and grab the latest 2.6.9 patch (it's in his archives in the guest patches section). After you've unzipped and patched everything, simply taking the default configuration from

make menuconfig ARCH=um

isn't sufficient. One has to enable these options in the kernel configuration as well:

Character Devices/File Descriptor Channel Support
Character Devices/Port Channel Support
Character Devices/tty Channel Support
Character Devices/xterm Channel Support
Block Devices/Virtual Block Device
File Systems/Pseudo File Systems/dev File System Support

Then, you can follow the normal instructions about compiling, stashing a root_fs file system somewhere, and then using the devfs=(no)mount option appropriately.
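Putting the whole sequence together, it looks roughly like this (the patch filename below is a placeholder for whatever the guest patch from Paolo Giarrusso's archives is actually called, and mem=128M is just an example):

```shell
# Unpack the vanilla 2.6.9 tree and apply the UML guest patch.
tar xjf linux-2.6.9.tar.bz2
cd linux-2.6.9
patch -p1 < ../uml-2.6.9-guest.patch   # placeholder name for the patch

# Configure (enabling the options listed above) and build.
make menuconfig ARCH=um
make ARCH=um                           # produces a ./linux executable

# Boot the UML kernel against a root filesystem image.
./linux ubd0=root_fs devfs=nomount mem=128M
```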

This is enough to get UML to bring up a prompt, but that's as deep as I've dug so far. I'm still weighing whether it would have been easier simply to have started with a Debian package of UML.

Tuesday, February 08, 2005

CorelDraw 11 Import/Export Woes

Being from Ottawa, I'm a CorelDraw user (Corel is an Ottawa-based company). I'm not a very good CorelDraw user (I just use it to draw simple diagrams), but I'm still a CorelDraw user. Just like other CorelDraw users, I have to live with both the good and the bad features of CorelDraw. It comes with a lot of features and applications for a good price, but you have to live with the ever-expanding resource usage of each version, buggy features, an under-designed user interface, and a general lack of polish. Perhaps things will get better now that they're under new management.

I bought a discount version of CorelDraw 11 a few months ago, and I use it for mostly computer science type things: presentations and LaTeX diagrams. In order to use CorelDraw in this sort of configuration, you need to do a lot of bulk importing and exporting. I just thought I would blog my approach to making CorelDraw 11 do what I want in case the one other person who uses CorelDraw this way is interested :-).

For example, I usually draw all my LaTeX figures in one CorelDraw document and then use a bulk export macro to export each page as a different eps file that I can reference in LaTeX. CorelDraw 11 doesn't let you configure eps export options from a macro, so you have to export one eps file by hand first and set the options there; when you run the macro afterwards, it just reuses the options you set before.

Also, pdflatex prefers to import vector graphics in the pdf format. Unfortunately, CorelDraw 11's pdf exporter doesn't support bounding boxes, so the exported pdf files do not import correctly. I found a program called eps2pdf (I use the version that comes with a nice Windows GUI because I can't figure out how to invoke the preferred eps-to-pdf conversion utility "epstopdf") which takes eps files exported from CorelDraw and converts them into pdf documents that pdflatex can import. The program chokes if the Corel-exported eps files have text converted to curves or if they are too large. It's possible to invoke eps2pdf from the command line to convert all eps files in a single directory, which is useful.
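For the record, if you can get the command-line route working, the batch conversion is just a loop; this sketch assumes a TeX distribution that ships epstopdf, and I haven't tested it against Corel's output:

```shell
# Convert every eps file in the current directory to pdf for pdflatex.
for f in *.eps; do
    epstopdf "$f"    # writes the matching .pdf next to each .eps
done
```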

Just recently, I've tried to import a lot of xfig-created eps files into CorelDraw 11. This sounds a little odd, because it doesn't make sense to create diagrams in xfig when you own a copy of CorelDraw. The reason I needed to do this was that I needed to generate a lot of complicated machine-generated Postscript diagrams. I don't understand Postscript, but xfig generates very clean, easy-to-understand Postscript output. So when I'm in this situation, I draw a couple of simple primitives in xfig, export them as eps, then write a program that follows the template generated by xfig when creating its own Postscript diagrams.

Unfortunately, Corel's eps importer imports the text in xfig eps files as curves instead of as text. I managed to side-step this problem by using ps2ai.bat to convert the eps Postscript files into ai Postscript files. CorelDraw 11 then imports the text in these ai files without problem. Unfortunately, somewhere along this conversion process, extra outlines get added to objects. It's possible to simply delete these outlines by hand (especially easy if you use the ObjectManager view), or you can import the xfig eps files directly, copy all of the non-text objects, and then use these objects to replace the non-text objects in the imported ai files. There's probably an easier way to do this, but I'm still trying to figure it out.
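The "follow xfig's template" trick amounts to something like this: a tiny generator that writes an eps file by hand, with a fixed header followed by one drawing command per primitive. The header lines here are generic EPS boilerplate, not xfig's exact output:

```python
def write_eps(path, segments):
    """Write a minimal eps file drawing straight line segments.

    segments is a list of ((x0, y0), (x1, y1)) pairs in Postscript
    points.  The header is generic EPS boilerplate; a real xfig
    export has a larger preamble, but the idea is the same.
    """
    xs = [x for seg in segments for (x, _) in seg]
    ys = [y for seg in segments for (_, y) in seg]
    with open(path, "w") as f:
        f.write("%!PS-Adobe-3.0 EPSF-3.0\n")
        f.write("%%%%BoundingBox: %d %d %d %d\n"
                % (min(xs), min(ys), max(xs), max(ys)))
        f.write("1 setlinewidth\n")
        for (x0, y0), (x1, y1) in segments:
            f.write("newpath %g %g moveto %g %g lineto stroke\n"
                    % (x0, y0, x1, y1))
        f.write("showpage\n")

# Draw an X across a 100x100 box.
write_eps("cross.eps", [((0, 0), (100, 100)), ((0, 100), (100, 0))])
```

Once your program can emit files like this, any machine-generated diagram is just a matter of looping over your data and printing the right moveto/lineto lines.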

Friday, February 04, 2005

Extensibility in X3D

During my rant against X3D yesterday, I wanted to complain about the extensibility model of X3D as well, but I couldn't think of a better approach to supporting extensibility in a 3d file format, so I decided that it would be inappropriate to make a big deal about it.

Of course, I thought up a better scheme last night, so now I feel justified in complaining about the approach used by X3D to support language extensions. Although I complained about how the X3D event model was too abstract to usefully express certain concepts, I find the X3D scene graph hierarchy to be too concrete and specific to support extensibility in a graceful manner.

In X3D, scene graph nodes are well-defined types. Each node has specific fields and no others. To add support for different objects, one has to create a new scene graph node with the fields one needs (or insert a lot of metadata, but I'll discuss that later). If an X3D browser encounters a node type that it doesn't understand in an X3D file, probably the best it can do is to simply ignore that node. Unfortunately, this doesn't degrade gracefully. For example, consider the mesh objects that are currently in X3D. If I want to make an articulated mesh, I need a way to add weights to each of the mesh vertices describing the influence of various bones on the position of the mesh point. To do this, I need to make a new scene graph node for articulated mesh objects that has a field for holding the bone weights. So now, if an X3D browser encounters my new articulated mesh node type in a file, it won't know what to do and will simply not render it. Ideally, the browser should degrade gracefully and render the data as a normal mesh, but that's not possible because there's no way for the browser to know that there's a link between an articulated mesh and a normal X3D mesh.

Over time, as more and more node types are added to the standard, the standard becomes so big that no browser is able to implement the whole specification, resulting in many X3D files that are simply incompatible with many X3D viewers. In fact, this can already be seen in the current standard, which has different node types for IndexedFaceSets, IndexedTriangleSets, IndexedTriangleFanSets, and IndexedTriangleStripSets. All of these node types are variations on the same theme, but an X3D browser must be explicitly coded to handle all four types separately.
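To make the failure mode concrete, here's a sketch in X3D's XML encoding; the ArticulatedMesh node and its skinWeights field are invented for illustration and aren't part of any spec:

```xml
<!-- Any browser can render this standard mesh. -->
<IndexedFaceSet coordIndex="0 1 2 -1 2 3 0 -1">
  <Coordinate point="0 0 0, 1 0 0, 1 1 0, 0 1 0"/>
</IndexedFaceSet>

<!-- A hypothetical extension node.  A browser that doesn't know
     ArticulatedMesh can't tell that it is "a mesh plus bone weights",
     so it drops the whole node instead of falling back to the mesh. -->
<ArticulatedMesh coordIndex="0 1 2 -1 2 3 0 -1"
                 skinWeights="0.7 0.3, 1.0, 0.5 0.5, 1.0">
  <Coordinate point="0 0 0, 1 0 0, 1 1 0, 0 1 0"/>
</ArticulatedMesh>
```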

One way around this problem is to use the metadata facilities of X3D. So an articulated mesh could be stored as a normal mesh with all the bone weights stored as metadata in the mesh. A browser that doesn't understand the metadata can just render the mesh as a normal mesh, and a more advanced browser can interpret the metadata and extract the bone weight information. Similarly, all the different triangle sets could be encoded as IndexedFaceSets with metadata suggesting the optimisation of rendering the node as triangles.

And therein lies the way to gracefully supporting extensibility in X3D. There shouldn't be set node fields in X3D. Instead, all fields should be metadata. Most 3d objects can be described as a control surface with various extra descriptive data thrown in. As such, X3D should simply abstract all 3d objects to being a base control surface type, and all the extra descriptive data about normals, colours, shape, etc. should just be metadata. So a cone is a box node that is tagged as a "cone" in its metadata. A height map is just a mesh node that is tagged as being a "height map" with some metadata describing the orientation of the heights. It may not be the most efficient way of storing data, but the benefits in terms of gracefully supporting extensibility are worth it. A minimal X3D browser then only needs to be aware of a small number of abstract node types. Even on very complex 3d models, an X3D browser will always be able to extract something that it can render.
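The metadata version of the articulated mesh might look something like this (MetadataFloat is a real X3D node, but the name and layout of the bone-weight data here are made up):

```xml
<!-- The same mesh, with bone weights carried as metadata.  An unaware
     browser ignores the metadata child and still draws the mesh; a
     smarter one reads the weights out of it. -->
<IndexedFaceSet coordIndex="0 1 2 -1 2 3 0 -1">
  <Coordinate point="0 0 0, 1 0 0, 1 1 0, 0 1 0"/>
  <MetadataFloat containerField="metadata"
                 name="skinWeights"
                 value="0.7 0.3 1.0 0.5 0.5 1.0"/>
</IndexedFaceSet>
```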

Instead, X3D simply took the flawed VRML model and recoded it in XML. I guess we can always wait until X3D 2.0.

Thursday, February 03, 2005

U3D vs. X3D

I was looking at the website of the purveyor of the X3D format yesterday, and I noticed that they had a newspost there slamming the rival 3D format U3D. I haven't read the U3D spec yet, but based on the newspost, it sounds pretty good. In fact, I think that if the U3D spec had been available when I started my X3D project, I would have used U3D instead.

The newspost complains about how X3D only supports triangle meshes and the like. Honestly though, having just finished implementing a horrible n^4 concave polygon triangulation algorithm for my X3D viewer, I'm liking the idea of a format only supporting triangle meshes more and more.
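I won't inflict the n^4 algorithm on anyone, but for flavour, here is a sketch of ear clipping, the usual textbook way to triangulate a simple polygon. This is my own illustration, not the algorithm from my viewer; as written it's roughly O(n^3), and it assumes counter-clockwise vertices with no self-intersections:

```python
def triangulate(poly):
    """Ear-clipping triangulation of a simple polygon.

    poly is a list of (x, y) vertices in counter-clockwise order;
    returns a list of index triples into poly.
    """
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def in_tri(p, a, b, c):
        # p inside (or on the boundary of) the CCW triangle abc
        return (cross(a, b, p) >= 0 and cross(b, c, p) >= 0
                and cross(c, a, p) >= 0)

    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        for i in range(len(idx)):
            ia, ib, ic = idx[i - 1], idx[i], idx[(i + 1) % len(idx)]
            a, b, c = poly[ia], poly[ib], poly[ic]
            if cross(a, b, c) <= 0:
                continue  # reflex (or degenerate) corner: not an ear
            if any(in_tri(poly[j], a, b, c)
                   for j in idx if j not in (ia, ib, ic)):
                continue  # some other vertex intrudes: not an ear
            tris.append((ia, ib, ic))  # clip the ear off
            del idx[i]
            break
        else:
            break  # bad input (e.g. self-intersecting): give up
    tris.append(tuple(idx))
    return tris
```

A square comes out as two triangles, and an n-vertex simple polygon always yields n - 2, concave corners and all.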

I've only implemented a small amount of the X3D spec, and I can't help but feel that it lacks a certain elegance and simplicity in its design. I think many of the problems evolved out of the fact that the designers wanted people to be able to code X3D by hand. As such, the spec supports lots of shortcuts and features to aid people coding up X3D manually, such as the ability to leave out certain tags or to let the browser automatically calculate normals, etc. These sorts of things simply make the implementation more complicated and result in lots of "special cases" in the specification. In reality, no one designs a 3d model using text (believe me, I tried this once. It's totally hopeless), so it would have been better to leave out stuff like that.

I'm not implementing the X3D event model, so I'm not too familiar with it, but my gut feeling is that it is likely too expressive. The Postscript language has a similar problem in that it is a full programming language, meaning that extracting meaning from the language is extremely difficult without actually executing it. For example, let's say I wanted to export an animation to another program. The exported animation will have a clock object whose timing is fed into some sort of coordinate generator whose events would be fed into the actual object being animated, or something like that. Would an X3D importer be able to interpret the combination of event objects and event routing as being an animation and import it as such? Or would it simply have to import these event nodes as-is and leave it to the user to figure out that they represent an animation? Just as human language is too expressive for computers to understand (which is why we have programming languages), the X3D model might be too expressive for an import tool to recognize the patterns in it; more restrictive languages are often more useful for exactly this reason. If a language is too expressive for an import tool to understand its meaning, so that it ends up being imported as a black box, then the language might as well be some standard language like ECMAScript that people are already familiar with.

And of course, there's my bias against "platforms." I feel that the best specifications are for little self-contained toolkits that can be bolted on to existing applications. X3D was designed as a complete platform for 3D browsing, so it has a tendency to want to take over your application as opposed to being a simple bolt-on. For example, to implement a module for importing animation, you need to add the event model to your application, some sort of reflection mechanism for X3D objects to parse the event code, the actual X3D objects themselves, etc. etc. Afterwards, it's no longer a simple little import tool; you've essentially just written an X3D browser. At that point, you might as well just write your whole application to be X3D-based.

X3D simply tries to do too much and as a result is too complex. U3D focuses on one small aspect of 3D file exchange and hopefully ends up being small, graceful, and easy to use. X3D is somewhat like the TIFF file format, which is so complicated and supports so many features that no one really uses it for anything. It's a coin toss as to whether an arbitrary TIFF file will be importable by a given TIFF import tool: there might be multiple pages in there, encoded in some weird colour space, compressed using some unknown scheme, etc. U3D, hopefully, is more like PNG, where you're pretty much guaranteed success.