"Don't fall now or we'll both go."

attributed to Layton Kor, spoken to numerous of his partners

Disclaimer

IMPORTANT DISCLAIMER: Trusting your life solely to something you read on the internet is just plain stupid. Get corroboration from a more reliable source, use your common sense, don't get yourself killed, and don't come crying to me (or the people I've quoted) if you do.

What's new

Updates are not too frequent at the moment and are mostly made to climbing area pages. There's a Change_log; the same info is also available as an RSS feed.

The North America pages were given a long-overdue overhaul. At the same time, mountains were moved to the dataset structure into which the Asian and South American ranges were moved earlier. This also prompted a lot more detail to be added to some areas.

While I was at it, I also fixed:

  • css to produce a somewhat larger font
  • xsl so that it doesn't lose the id of a paragraph. This is needed to retain ingress markup.
  • Changed map link creation so that it doesn't create duplicate map links
  • Some details added to links, tags and media pages.

During the last couple of days, a fair bit of changes. In no particular order:

  • Some details added to Europe area page.
  • A couple of tweaks to css rules for nicer printing (3-col layout).
  • Fixed several issues on the mountain main page. Apparently at least Chrome has some vertical alignment issues with display: table-cell if the first item of any of the cells is an image.
  • Fixed some issues with image formatting for #random-image. Also, images loaded into the random-image 'carousel' had far less complete captions than the regular images. Replaced the old non-jQuery random image script with one that uses jQuery. At the same time a lot more logic was added to caption string generation, so that it should now be on par with the static image captions.
  • Several new articles added to index-news (blog). Also fixed a couple of old articles on index-news (blog) that broke the layout due to empty elements.
  • Some details added to a few books, tags and links.
  • Added intro section to Africa, North-America, South-America, Oceania and Polar Regions area pages.
  • Few fixes in highest mountain page. Also some additions and fixes on related mountain pages.
  • Fixed how country and range are evaluated in mountain lists. This way the info can be either directly under the mountain or under summits, and either an element or an attribute.
  • Replaced the component used to pan & zoom svg images: out with panzoom, in with svg-pan-zoom.
  • Yet again had to tweak Exiftool parameters for it not to truncate values.
  • Removed hard-coded formatting of section/@type="book" and section/@type="climb". Added some spans to both to mark up the meaning of the info and adjusted css accordingly. For books, some of the information that was previously displayed inline is now hidden and only displayed in a tooltip popup.
  • Book info for several books adjusted in Bookcat. For example, added some details and removed subtitles, series etc. from the title, moving them to their appropriate fields.
  • Changed all grading pages to the 2-column-right layout and added a couple of images and formatting for intro paragraphs.
  • Adjusted the book processing logic in query-xsl to fix problems when handling paragraph/@type="book". Some tweaks were required also for image processing xsl.
  • Added new peaks: Cerro San Lorenzo, Tengi Ragi Tau, Kohe Bandaka and several peaks in the Hindu Kush. Added a lot of details to Cho Polu and Meru North.
  • Several additions to tags resource file.
  • Some reorganization and a few other changes on the Highest mountains and Alpine 4000ers pages. Some details added to Rimo I and Chogolisa.
  • Fixed some inconsistencies in JavaScript and css file locations for clearer management.
  • Several big changes in section/@type='media' processing. First of all, media can now be fetched from an external file, which makes it a lot easier to handle multiple references to the same media and also makes it possible to use external tools to manage media item metadata with the help of nfo-files. This required plenty of changes in the publishing xsl as well as in the javascript used to produce the content popup, as I also added several new details to that view.
  • New info added to highest peaks page.

Since some of the mountains listed in my highest mountains table have had their official height changed since I created the table, I thought to update them and expand the list somewhat. One of the key reasons for the update was also to add prominence, so that it would be relatively easy later to adjust what is included in the table should I choose to do so. I figured I'd grab some existing data from Wikipedia and various other sources and be done with it. However, combining data from multiple sources turned out to be a bigger project than I originally anticipated, as I realised my site lacks info for many of the peaks. The data was also tedious to combine with the existing data, as many of the peaks have a great number of different spelling options. What ensued was a rather tricky exercise of:

  • regular expression search and replace operations to convert data from various text and pdf formats into a usable format importable to Excel for manual fixes
  • Next up was importing the data into a database and cross-linking the various new data with the existing data in a new database view.
  • The next step was taking that newly combined data back to Excel for additional manual fixes and the addition of missing data.
  • The final piece of work was to create an xml mapping for Excel. So basically I now have a pretty comprehensive list of the highest peaks' data as an xml-file that can be imported to Excel, edited there and exported back to xml.
  • I also threw together some xsl so I can combine the data created in the above-mentioned Excel file with data that already exists on my site. Mountain lists can also use data from multiple sources, so I don't have to update the original resources (immediately, that is) in order to use the details maintained in such a dataset.

The beauty of this xml-dataset is many-fold:

  • It can be used to generate mountain sections with basic data for the missing peaks
  • It can also be queried when building that list, so it would be easy to change the prominence cut-off from 500 m to any other value.
  • This would also make it possible to allow on-the-fly filtering by turning the entire thing dynamic, essentially giving a similar feature set to what is typically offered by data warehouse/big data/business intelligence tools (insert hype word of your liking here). Not sure yet whether that kind of feature set is going to be added, though.
  • Of course, the dataset can be updated from individual mountain data, so there's no duplicate maintenance.

This might actually mean that I move to a single large list of just the mountain data and have all of the area pages query that on the publishing side. Should I choose to follow that route, the main impetus would be maintenance (the strict format of mountain data is much easier to follow when all of the data exists in a single file, which could be a fair bit simpler in structure as everything else would be stored in other files). However, this would mean that the main mountain dataset would be several MB in size; I'm not sure how easy that would be to maintain. I don't want to go too far into the realm of data normalization (such as separating mountain data and route data). The main benefit would be to simplify the data structure quite a bit, but as the data is entered and maintained by manipulating xml-files, generally on a mountain or area basis, this separation would make that a lot more tedious by forcing me to maintain several files.

I tested my mountain database idea by creating a separate xml-file containing just the mountains of the Pamir. The data is partly maintained in an Excel sheet for basic mountain-level data; references and route details are copied from the old article data. The number of peaks is many times larger than before, at least for the time being. However, the peaks with no real data will probably be either hidden completely or at least shown in a much more compact manner. I have also been playing around with various xslt-tools to convert data back and forth between my xml-format and Excel. I also found out that generating a kml-file out of this new mountain dataset is very simple, and that it can be imported to uMap, Google Maps etc. very easily. Therefore it's likely that there will be a kml file created for each area and possibly several pre-made area maps.

Given the success, all Asia and South America peaks were moved to this dataset structure and related changes were applied to all affected pages and xslt. At the same time, some structural changes were also applied to those same pages and some areas also got more information.

During the last couple of weeks, a fair bit of changes. In no particular order:

  • Pretty large overhaul of some of the area pages. Several areas in Europe, Asia and Africa have got more info. Africa has been detached to its own page and lots of information was added. Also Hindu Kush and Hindu Raj have been detached to new Asia/Afghanistan page and Chinese ranges to new Asia/China page. Major updates to Pamir and Tien Shan pages.
  • Several fixes in Highest mountains and Alpine 4000ers lists. Or to be more exact, in xslt logic that is used to populate those pages. Also some of the underlying data got several fixes.
  • Several tweaks to css rules for nicer formatting, particularly on mountain pages. Also, the 1-column layout now uses a new @media print rule set, which produces somewhat different output. It will probably be implemented for the other layout variants as well. The plan was to implement a much more sophisticated print template, but I found out that support for the css3 paged media module doesn't really exist.
  • Fixed several issues in image caption generation, particularly regarding outputting license information.
  • Fixed the map-link generating javascript to use OpenStreetMap also in the address search scenario.
  • Added new functionality to pull data from elsewhere. Essentially this is an extension of functionality that has existed before.
  • SVG images are output in an object tag to allow interactivity. This is in use for example in the European Alps area map (old-fashioned image map replaced with an active svg). Apparently there are some scaling issues.

I've thought about adding an image carousel control for quite a while and even tested some promising-looking candidates. However, most of them seem to require the images to be of equal dimensions, which makes them unsuitable for me. Some digging around revealed the following most promising candidates:

So I gave them a closer look. Despite a not-too-flashy demo, Lemmon Slider seemed like a pretty good fit for my needs; it just needed a bit of tweaking:

  • make the navigation tools look nicer. Quite a few other tweaks on the CSS side as well, to make it function better when images have a standard height but variable-length captions make the box heights variable.
  • remove the navigation tools from the source and generate them dynamically to make the whole thing faster to implement and cleaner
  • remove static calls to specific ids and replace them with a loop over all divs with a specific class name (see the sketch after this list)
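To illustrate the last two points, here's a minimal sketch of the pattern, written as a rough outline rather than the exact code used on this site: the hook class name (image-slider) and the lemmonSlider() initializer call are assumptions for illustration.

    // Initialize every slider container found by class, instead of wiring
    // the plugin to hard-coded ids, and generate the navigation markup on the fly.
    jQuery(function ($) {
      $('div.image-slider').each(function () {
        var $container = $(this);

        // Generate the navigation controls dynamically so they never live in the source
        // markup; they are then bound to whatever next/previous API the plugin exposes.
        $('<div class="slider-nav">' +
          '<a href="#" class="slider-prev">&#8249; prev</a> ' +
          '<a href="#" class="slider-next">next &#8250;</a>' +
          '</div>').insertAfter($container);

        // Hook up the slider itself (plugin call assumed; check the plugin's docs).
        $container.lemmonSlider();
      });
    });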

I've been contemplating adding maps and topos in svg format for quite a while. Lacking and inconsistent support for svg has been one reason that has held me back; the lack of some of the features I'd like to have has been the other.

One obvious feature that would be useful for maps is zoom. Since svg is a vector format, it scales nicely natively. However, in order to actually put that possibility to work, some form of scaling mechanism is required. Apparently there are two principally different ways to achieve this:

  • Using the scripting capabilities of svg itself, embedding the zooming functionality directly in the svg-file. The article Pan and zoom control by Peter Collingridge shows this approach at work (the same site also has various other tricks regarding svg maps).
  • Using controls on the website to manipulate plain-vanilla svg's. Generally this means a JavaScript library or a plugin for a JavaScript framework such as jQuery.

The latter approach is principally better IMO, as it does not require any website-specific content in the svg itself. I'm not quite sure how various drawing tools react to scripts, for example. However, finding a good solution using this approach turned out to be more difficult than I thought.

For zooming to make much sense, I feel that it needs to take place within a fixed-size container, otherwise the layout breaks. This makes zooming a bit more complicated: instead of just zooming, the solution needs to pan and crop as well. Actually, the only real way to zoom maps is a far more complicated proposition than simply zooming, as the amount and size of detail should also react to the zoom level to keep the map readable. For the sake of my use cases, this aspect can probably be ignored.

Frustrated after searching in vain for a perfect solution, I gave the native svg solution a go. Fairly simple and it works, but not without pitfalls:

  • No panning with mouse
  • No zooming into focal point (mouse)
  • No zooming with mouse-wheel
  • SVG dimensions must not contain a unit
  • Initial size and position don't seem to want to play ball. Additional zoom and pan directives were required to fix this.

So all in all, a workable solution, but not perfect.

Since this site uses jQuery, if I am to go with a JavaScript solution it should be either a jQuery plugin or plain-vanilla JavaScript. A quick search came up with several possible solutions:

After some testing, I ended up going with jQuery.panzoom with some modifications. First of all, I fixed the blurry quality by applying a css hack. Second, I tweaked a lot to make the script much more unobtrusive than the provided sample. All js (including loading of the plugin) and css is moved into external files. Navigation tools and wrapper elements are also removed from the source and generated dynamically. Thus my solution requires only one class value in the source file to hook an image up with zooming; all the rest is applied dynamically. I also tweaked the zooming controls a fair bit to make them fit better with my tastes.
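For illustration, here's a stripped-down sketch of that unobtrusive wiring, under the assumption of a hook class called svg-zoom and the control-related options documented for jQuery.panzoom; the actual class names, wrapper markup and options on this site differ.

    jQuery(function ($) {
      $('img.svg-zoom').each(function () {
        var $img = $(this);

        // Wrap the image in a fixed-size container so zooming doesn't break the layout.
        var $wrap = $img.wrap('<div class="panzoom-wrap"></div>').parent();

        // Generate the zoom controls dynamically instead of keeping them in the source.
        var $controls = $('<div class="panzoom-controls">' +
                          '<button type="button" class="zoom-in">+</button>' +
                          '<button type="button" class="zoom-out">&#8722;</button>' +
                          '<button type="button" class="zoom-reset">reset</button>' +
                          '</div>').prependTo($wrap);

        // Initialize jQuery.panzoom and hand it the generated controls
        // (option names as documented for the plugin).
        $img.panzoom({
          $zoomIn:  $controls.find('.zoom-in'),
          $zoomOut: $controls.find('.zoom-out'),
          $reset:   $controls.find('.zoom-reset'),
          contain:  'invert'
        });
      });
    });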

I also ran some tests on leveraging the image map possibilities provided by svg. It turns out pretty nifty image maps can be done remarkably easily. However, most of the samples I found come at the cost of polluting the svg with a bunch of onmouseover etc. event handlers here and there in the svg contents. I find that to be a non-starter, as at the very best they make editing cumbersome, particularly if the native format of the file is not svg. And even if there were no practical reasons whatsoever, keeping such interaction away from the actual content makes managing it much simpler in the long run. However, in the short run it may not be the case. That being said, after a few hours of googling and trial and error I was able to pull together ECMAScript code (to be stored within the svg) that contains all the logic required for standard image map functionality (links and highlighting the active area) with zero event handlers, link urls or anything like that within the graphics itself. And the code is very easy to adjust for different image maps.
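For the record, a rough sketch of the idea: one script block stored inside the svg that turns plain shapes into an image map. The element ids and target urls below are made-up examples; the real id-to-url mapping lives only in this script, so the graphics themselves stay free of inline event handlers and links.

    // Stored inside the svg in a <script type="application/ecmascript"> element.
    var areas = {
      'area-mont-blanc': 'alps_mont_blanc.html',   // example id -> example url
      'area-pennine':    'alps_pennine.html'
    };

    Object.keys(areas).forEach(function (id) {
      var el = document.getElementById(id);
      if (!el) { return; }

      // Highlight the active area on hover by toggling a class styled in the svg's own css.
      el.addEventListener('mouseover', function () { el.setAttribute('class', 'area active'); });
      el.addEventListener('mouseout',  function () { el.setAttribute('class', 'area'); });

      // Navigate the embedding page when the area is clicked.
      el.addEventListener('click', function () {
        window.top.location.href = areas[id];
      });
    });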

Sometimes huge projects start with barely anything. I read an article about a climb in the Karakoram. I did some checking to figure out what kind of peak it is and where exactly it is, and found that this site's section about the Karakoram was not very comprehensive. So I figured I would do a slight overhaul of the Karakoram section. Already this ended up taking a bit more time and effort than I was anticipating.

However, it got way more out of hand when I added a bunch of links to AAJ, Mountain Info etc. I had thought about adding some mashup magic for such links for quite a while but never got to it. Now I bit the bullet and did it. While doing so, I also added a couple of shorthand formats to make editing source files faster. These changes also made me go through some of the xsl code I use to turn source files into final xhtml. I ended up doing quite a bit of code cleaning and consolidating as well; in particular, microformat generation is pretty well consolidated now.

Finally, I had static links to Google Maps from mountains and a few other items. I replaced most of the static map links with dynamically generated ones, which allows easier site-wide manipulation of such functionality. That prompted me to play around a bit with tweaking query parameters. While doing so I found out that the new Google Maps doesn't like to use terrain view with queries even if it is specifically instructed to do so. Fuck that. So mountain map links now hook up with OpenStreetMap instead of Google Maps.
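As a sketch of what such dynamically generated links can look like, here's the general shape of an OpenStreetMap url and a hypothetical hook (data-lat/data-lon attributes on elements with a geo class) for attaching the links; the actual markup hooks on this site differ.

    // Build an OpenStreetMap url centered on the given coordinates.
    function osmLink(lat, lon, zoom) {
      zoom = zoom || 13;
      return 'https://www.openstreetmap.org/?mlat=' + lat + '&mlon=' + lon +
             '#map=' + zoom + '/' + lat + '/' + lon;
    }

    // Example: turn every element carrying coordinate data attributes into a map link.
    jQuery(function ($) {
      $('.geo[data-lat][data-lon]').each(function () {
        var $el = $(this);
        $('<a>Map</a>')
          .attr('href', osmLink($el.data('lat'), $el.data('lon')))
          .appendTo($el);
      });
    });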

As a side product of tweaking the references to resources like AAJ, I also played a fair bit with the css for how to present such information. I ended up with an inline list, which makes the info more compact. While I was at it, I also tweaked several other aspects of the mountain pages. Most of the changes are quite subtle and sometimes not visible at all (punctuation etc. moved from xhtml generation to css). I also replaced some old icons with the Font Awesome icon font.

I had been using a heavily customized version of a layout system based on a 3-column fluid layout for quite a while. Essentially it was using a modified version of the faux columns technique to achieve an equal-height layout. It is quite robust and compatible with old browsers. However, it uses several wrapping divs, floats, clears and negative margins, as well as several hacks and tricks, to achieve it. So basically it makes a principally simple layout not really that simple. I decided that support for browsers like IE 5.5 in this day and age is no longer very important, so I went on and replaced the layout system with a far simpler one relying on the CSS display: table property to achieve a pretty much identical layout with far less and far simpler css code. While I was at it, I threw away JavaScript and css squarely targeted at very old versions of IE. The current system should be solid from IE8 onwards and may work as intended starting from 5.5. If the browser does not support the css display table instructions, the site should still remain perfectly usable (as it degrades to a single-column layout). The change also made the html more semantic by removing non-semantic wrappers, as well as solving a few incompatibility issues with some jQuery components (excessive use of floats in the layout seems to have caused unexpected behavior in image display systems using floats themselves).

This one got seriously out of hand. Originally I intended to change the menu component to something that would work better with smaller-viewport devices. So out went the old workhorse (a heavily adapted dTree JavaScript menu system), in with the SmartMenus jQuery plugin. This in itself was not too big of a task, but its side effects certainly were. Now that I no longer had a menu system that takes up an entire column, I needed to figure out how to meaningfully populate the columns, as one of them was suddenly free. For the home page blog-thingy I ended up finishing a post navigation tree (actually an accordion list) for the main pages. This also exposed a related-posts feature that had been around for some time but was never exposed in any way. Doing so made me aware of some room for improvement to make the list of related posts look nicer. The new layout also made it easier to make the layout more responsive to screen size, so a lot of css tweaking was in order to make the site look and perform better at smaller screen sizes. By doing so, I also discovered several minor issues in the publishing xsl's, which I ended up fixing. While I was at it, some xsl code was improved and refactored to get rid of partially duplicate code. Finally, some data issues were fixed as well.

Due to a platform change at my service provider, I apparently no longer have access to Perl, which rendered some of the scripts (site search and feedback form) obsolete. So I have to replace them with more contemporary solutions as well.

Lots of changes here and there to add support for the jQuery plugin Elastislide to produce a scrollable image bar. Also, several hiccups have been fixed in the publishing xsl's.

Externalized product info to a separate product database file. Info can be pulled from there into articles. Accompanying changes in the publishing xsl's. Some minor new features regarding hProduct as well.

I changed the layout of mountain pages to a nicer-looking one. Also, lots of information was added to the Cordillera Huayhuash mountain page.

I changed the layout CSS to be responsive (columns are reordered according to screen size). At the same time, some tweaks in the CSS files affecting the formatting of the actual content. A few pages were switched from the two-column layout to the three-column layout. To facilitate these changes, several smallish tweaks in the publishing xsl's.

Lots of additional details added for Cordillera Blanca.

Externalized the Himalaya mountains to a separate page from the Himalaya main page. Rather thorough overhaul of the Indian Himalaya and eastern Himalaya sections, with lots of additional information and a complete structure change to allow for a far more detailed approach.

Some CSS display and nth-child() trickery to produce more compact output of mountain info.

I externalized the Karakoram, Pamir and Tien Shan sections from the general Asia ranges page to keep the file size somewhat reasonable. At the same time, several more high peaks were added to these areas, particularly to the Karakoram page, which was also otherwise updated with a lot of information and had its peaks divided into subranges.

I also started my long-standing project of cleaning up the publishing xsl by moving geo microformat creation to a single template and changing all the places where the geo microformat is created to point to it. The same should be done for the generation of all microformats, particularly vcard.

Oh yeah, the highest mountain table was also completely overhauled: several more peaks were added and all local details were removed, as they are pulled from the mountain detail pages. This also led to some fixes in the publishing xsl's.

While I was tinkering, I also added a couple of css rules to add service icons to the context menu for some eye candy.

Several more technical changes:

  • Moved stylesheet branching functions to a single javascript function for easier management.
  • Replaced tablesort.js with Tablesorter.js (jQuery plugin) for even more unobtrusive code. Because of this several unused css rules were removed and publishing xsl needed to be changed somewhat.
  • Lots of improvements for movie handling on front page.

Lately I have been considering adding tooltips on steroids to this site, to be used instead of standard tooltips at least on the front page to give more details for the links attached to the sidebar. The difficulty of finding a JavaScript that would do this cleanly and flexibly enough finally made me bite the bullet and resort to the jQuery JavaScript framework in order to use qTip2. All previous JavaScript, including all Ajax calls, had been done without resorting to any frameworks, to keep the code maintainable by me.

I did make lots of changes compared to the examples presented for qTip2, for cleaner usage though:

  • all qTip css and JavaScript is injected and executed by JavaScript run on page load, and only if the html page explicitly tells it to do so, to keep load times at bay (see the sketch after this list)
  • this also makes the page very clean and unobtrusive; the only things added to the html are one variable definition, a local JavaScript file and one more variable defined in the body onload() function.
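A minimal sketch of that conditional loading, assuming a page-level flag variable (here useTooltips) and made-up file paths; the real flag name and paths on this site differ.

    // Called from the body onload() handler mentioned above.
    function loadTooltips() {
      // Bail out unless the page has explicitly asked for tooltips.
      if (typeof useTooltips === 'undefined' || !useTooltips) { return; }

      // Inject the qTip2 stylesheet.
      var css = document.createElement('link');
      css.rel = 'stylesheet';
      css.href = 'css/jquery.qtip.min.css';
      document.getElementsByTagName('head')[0].appendChild(css);

      // Inject the qTip2 script and initialize the tooltips once it has loaded.
      var js = document.createElement('script');
      js.src = 'js/jquery.qtip.min.js';
      js.onload = function () {
        jQuery('#sidebar a[title]').qtip();   // basic qTip2 initializer, selector assumed
      };
      document.getElementsByTagName('head')[0].appendChild(js);
    }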

While I was at it, I also moved all stylesheet branching logic (previously external js for Mozilla, and IE conditional comments in the html) to the page-load js.

As a fortunate by-product of this, I could also remove references to several js-files from the html and load those JavaScripts dynamically from the js that runs upon page load:

  • tablesort
  • menu building
  • random quote

Since I mainly use xml tools to write source pages, and I routinely use auto-indentation on the source documents and the xsl's used to write output (very handy for making the xsl much easier to understand), I guess I was bound to encounter the unfortunate behavior of whitespace with nested elements. This is frustrating indeed, as microformats, for example, require a container element and elements within that container to contain particular elements of said microformat. This leads to whitespace issues if you intend to use the microformats within text paragraphs. For example <span>\n<span>...</span>\n</span> produces a single whitespace after the inner span. If the next character is something that should appear immediately after the contents of the said span (say a dot, for instance), it would have a very out-of-place looking whitespace before the dot. As far as I know, there are two remedies to this:

  • Set indent="no" in your xsl's output definition. This will make the source code of the output document look awful and highly unreadable for humans, which is unfortunate should you need to see the code in an understandable format for debugging purposes.
  • Don't mix text content with elements. I'm not a great fan of this approach, as it would require otherwise completely unnecessary xhtml markup (read: wrapping plain text in a span or something similar).

If browsers' source code viewers were up to the task at hand, the lack of indentation wouldn't be a problem. But none of them has the ability to indent source code, so it is a problem. That being said, they all contain some sort of developer tools which do indent code, so the former road is probably the lesser of the evils.

Some major restructuring of css.

  • Consolidated a few separate css-files into one file. This reduces the number of necessary @import instructions.
  • Consolidated duplicate css instructions.
  • Reorganised styles into separate sections for:
    • standard html elements (not targeted with class, id or name)
    • document template parts targeted with Id
    • document logical parts targeted with class (header, footer, search form, feedback form etc.)

Tweaked the menu creation js slightly to use the correct set of icons. Added a new variable so that the icon set is easier to change.

Removed pretty much all the inline JavaScript from the html pages and moved it to external files. Replaced the JavaScript image hover effect with a CSS image hover effect. Simplified the search function by getting rid of the unnecessary (and ugly) button and inserting help text instead.

I finally got around to fixing some incompatibility issues with the change log rss. It turned out to be a worthwhile exercise, as it taught me a few new tricks xsl's format-dateTime function can pull off. While I was at it, I also fixed some typos and a broken link or two, got rid of the unnecessary whatsnew.cgi perl script and updated the site description in the meta tags a bit. I also fixed a css issue with the html version of the change log and some minor issues in the search function output layout. And oh yes, I also uploaded the changes that I did several months ago.

I seem to have been pretty lazy in updating the change log. So here is a probably non-exhaustive list of recent changes:

  • Many changes, updates and additions in the Valais Alps pages. The structure has been adjusted both on the mountains page and in the valleys. Larger and/or more important valleys are now broken down into starting points, more or less anyway. Also, the route lists have been much updated. Furthermore, all sorts of additional info has been added, especially loads of references to guidebooks; so much so that the page currently starts to look like a cross reference between the various guidebooks covering the area. Finally, I added some interaction in the form of the web 2.0 context menu, which has been present on some of the other pages for quite a while. Also, I added a title image.
  • Loads of changes and additions to Grading/Ice. Most importantly, the comparison tables between WI technical grades and other ice climbing gradings have been updated (and adjusted), descriptions of the M-grades have been adjusted and new info added. Furthermore, some new routes have been added to the sample lists.
  • Due to the changes and additional details on the Valais Alps mountain page, updated Grading/Alpine to contain the updated info.
  • The addition of a title image to the Valais Alps page made me realize that there were some errors and omissions in the JavaScript I use to dynamically insert image captions from external xmp files into pages. So I had to roll up my sleeves and fix those as well.

The SSI script used to include random text has been replaced with an ajax version.

The JavaScript menu tree has been externalized. Previously the JavaScript used to produce the navigation menu tree items was included in the actual page; it has now been moved to an external JavaScript file (which required some additional logic in the js itself). This makes handling changes more straightforward, simplifies the xhtml page code and also decreases the file size. In the name of accessibility I also added a fallback mechanism to produce a simplified navigation menu tree for browsers with no JavaScript support (or JavaScript support turned off).

Well, this seriously sucks. Turns out Firefox is happy to devour xmp-files using responseXML (XmlHttpRequest) from the local drive, but doesn't want to process the very same files from the web server. After loads of trial and error, I found out that setting overrideMimeType to text/xml is the rescue.
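For reference, the fix is a one-liner on the request object (the file path here is just an example):

    // Without this, the browser ignores an xmp file served with a generic MIME type
    // and leaves responseXML empty; forcing text/xml makes the parsed DOM available.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'images/example.xmp', true);   // example path
    xhr.overrideMimeType('text/xml');              // the actual fix
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        var xmpDoc = xhr.responseXML;              // now a proper XML document
        // ... traverse the metadata here ...
      }
    };
    xhr.send(null);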

I finally took the time to write the Ajax code necessary to pull a random quote from an xml-file. I've been doing the very same for years using a Perl script, so the reason for all the trouble was to get my head around how to use XMLHttpRequest, a cornerstone of Ajax. This isn't used at the moment, though.

As a side product, I started to ponder whether I could use the same technique to randomly pick an image. Well, picking just the image would be easy (it doesn't really differ from randomly picking the quote). What differs, though, is that in order to properly handle the image, I would need access to the image metadata as well (for the caption) and size attributes. Given that I had earlier employed Exiftool to automatically generate xmp sidecar files, I have that data available in xml-files. So basically what I needed to do was:

  1. Select a random item from the list of files I want to rotate
  2. Request the xmp-file of the chosen image and extract the metadata
  3. Return properly formatted html to be written into specified div within the calling document

Sounds simple enough. And surprisingly, it didn't turn out to be much more complicated than it sounded. Well, in principle anyway. Once I figured out how to traverse the xml structure returned in the XMLHttpRequest response, pulling off the functionality I was after in a separate proof-of-concept project was easier than anticipated. Xmp-files contain several namespaces, something that I fully expected to cause major headaches. Turns out they don't. Ok, so far so good.
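For the curious, here's a proof-of-concept sketch of those three steps. The file names are made up, dc:description is used as the caption source and the target div is assumed to be #random-image; the real caption logic is more involved.

    // 1. Select a random item from the list of files to rotate.
    var images = ['images/peak_a.jpg', 'images/peak_b.jpg', 'images/peak_c.jpg'];
    var chosen = images[Math.floor(Math.random() * images.length)];

    // 2. Request the xmp sidecar sitting next to the image and extract the metadata.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', chosen.replace(/\.jpg$/, '.xmp'), true);
    xhr.overrideMimeType('text/xml');   // make sure responseXML gets parsed
    xhr.onreadystatechange = function () {
      if (xhr.readyState !== 4 || xhr.status !== 200) { return; }
      var xmp = xhr.responseXML;

      // dc:description lives in the Dublin Core namespace inside the xmp packet.
      var descs = xmp.getElementsByTagNameNS('http://purl.org/dc/elements/1.1/', 'description');
      var caption = descs.length ? descs[0].textContent : '';

      // 3. Write properly formatted html into the placeholder div in the calling document.
      document.getElementById('random-image').innerHTML =
        '<img src="' + chosen + '" alt="' + caption + '"/>' +
        '<p class="caption">' + caption + '</p>';
    };
    xhr.send(null);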

My next course of action was trying to implement that in a real-life scenario, which is about the time I ran into some peculiar behavior. Turns out I had stumbled on the notorious domain restrictions of Ajax. Well, sort of, anyway. Everything is part of the same domain, but trying to read a file using relative addressing that goes one directory level backwards just didn't seem to work. Which is a major PITA, as I pretty much would have to do exactly that. After a considerable amount of frustration and wasted time, I found out that this issue doesn't really exist; it is only encountered when not running the site on a web server.

That being said, I don't particularly like to run a local web server just for this very purpose. Luckily, it isn't too difficult to circumvent (well, few things are if you happen to know how). In this particular case I found that in Firefox, you can bypass the strict local file security policy by going into the Firefox config panel (type 'about:config' into the address bar) and setting the property "security.fileuri.strict_origin_policy" to 'false' by double-clicking it. That allows you to continue to access local files that are 'outside' of the directory structure you loaded from via XHR. IE seems to have the same limitation; not sure whether it can be circumvented somehow (and if it can, how). Neither Opera nor Chrome seems to have such a restriction. Anyway, now that this is sorted, I'll probably tidy up the code, then implement it so that I can rotate images on mountain pages.

When tinkering with that, I found out that this site causes quite a few CSS errors, which clutter the browser's error console, thus making it more difficult to track down real issues. The majority, if not all, of the CSS issues are caused by hacks used to overcome and circumvent browser-related bugs. It is very likely that I will get rid of those hacks in the near future; if someone still uses either IE 5.5 or old Netscape browsers, they are clearly a glutton for punishment anyway. The same probably goes for the conditional comments used to target various older IE versions; they make the code cluttered and difficult to edit with some editors. Anyway, I moved some dirty IE hacks from the main CSS to an IE-targeted CSS file, which minimizes the implications of the issue.

So much for namespaces not being a problem. It appears that you can make namespaced files work either in:

  • Firefox and IE, using getElementsByTagName("namespace:element"), which really isn't correct according to the spec
  • Firefox, Opera and Chrome (probably loads of others) but not IE, using getElementsByTagNameNS("namespace", "element"), which is correct according to the spec but does not work in the most common browser
  • Cross-browser, by applying browser-aware processing in JS, some sort of hack, or an external wrapper component (such as Sarissa) that makes it work cross-browser.

Obviously, every one of these options is poor.
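The browser-aware processing from the last option boils down to a small helper along these lines; the Dublin Core namespace URI and prefix are real, the calling code is illustrative.

    // Prefer the spec-correct namespace-aware lookup, fall back to the prefixed
    // tag name where getElementsByTagNameNS is missing (old IE). The fallback is
    // not spec-correct but works as long as the document uses that prefix.
    function getNamespacedElements(doc, nsUri, prefix, localName) {
      if (typeof doc.getElementsByTagNameNS === 'function') {
        return doc.getElementsByTagNameNS(nsUri, localName);
      }
      return doc.getElementsByTagName(prefix + ':' + localName);
    }

    // Example: read dc:description elements from an xmp document fetched via XHR.
    // var descs = getNamespacedElements(xhr.responseXML,
    //                                   'http://purl.org/dc/elements/1.1/',
    //                                   'dc', 'description');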

I finally got my head around how to retrieve details from an external xml-file using ajax instead of cgi. I might very well use this approach later on to replace the old cgi script currently used to pull a random quote from a text file. The solution seems to work in Firefox, IE and Chrome.

However, probably the more important thing this opens up is that I think I have what I need to rotate images randomly. Well, just rotating images randomly would be far simpler, but this approach gives me access to the xmp sidecar files, which is necessary for me to be able to use details from the image metadata stored in those files without any sort of help from server-side software. There are still quite a few things I need to solve before that is possible, though (most importantly figuring out how to use different namespaces and traverse a rather complicated xml structure with DOM and JavaScript).

Several improvements in layout and functionality

  • Nicer looking and more compact layout of mountain details.
  • Fixed and simplified several processing functions in publish scripts, especially related to images.
  • Changed publish scripts to use relative paths and bundled tools (Saxon-B and ExifTool) to make the whole site, including the publishing stuff, self-contained.
  • Added some new links
  • Added new images to mountain pages and replaced some other images with new ones. Extended the usage of xmp metadata.

Several improvements in layout and functionality

  • Nicer looking and more compact layout of route details.
  • Automatically generated events table on mountain level.
  • Added Mountain Project to the context menu
  • Added links to Google Maps for peaks with gps details.
  • Added display of route type

I finally got around to implementing support for image metadata. My solution uses ExifTool to extract embedded metadata from images and store it in xmp sidecar files. This is done as a batch process using a Windows batch file. After that the publishing xsl pulls the details from the xmp-files to be shown on the site.

Lots of updates in Bernina page.

I went on and replaced the "web 2.0" links available on some of the pages with a JavaScript menu, which I reckon is less obtrusive and ultimately better from a maintenance point of view as well, as it does not require changes in markup. The script used on the site is based on JavaScript Context Menu by Luke Breuer, though the sample has been changed quite a bit. Unfortunately, my tweaking seems to have broken compatibility with Internet Explorer 7. I tested the script with Firefox (3.0.7 and 3.1 beta), Internet Explorer 8 and Safari 4 beta. The best part of this approach is that whenever the integrated services' syntax requires changing, or if I want to add new services, all I need to do is change the JavaScript. At this point, the integrated services are:

Links are now automatically generated in the mountain list. This requires help from a Windows PowerShell script, as xslt is not capable of processing a file tree. The PowerShell script basically gathers the list of files contained within a single directory and stores the result in the clixml format (Export-Clixml) directly available through Windows PowerShell. The rest is handled with xslt that reads the name of a peak from the table, then finds out in which document that peak exists and generates a link to it.

I've also been playing with the idea of replacing the "web 2.0" links with a JavaScript menu, which I reckon would be less obtrusive and ultimately better from a maintenance point of view as well, as it would not require changes in markup. Given that the markup is automatically generated at publishing time, this isn't very important at all, though. I have a working solution making use of the RightContext script by Harel Malka. However, I would like to trick that up a bit to get rid of some attributes which currently need to be in the markup (these would be pretty trivial to generate automatically in the script in my use case).

Finally, there have been several additions and corrections on many mountain pages.

I added plenty of new functionality for the creation of mountain and route lists. These can be used to gather data from mountain pages and automatically generate, on the main article page, a list of peaks located in a specific area, grouped by location (sub-area), with automatically generated links for ease of navigation. All this is achieved through rather complicated xsl(t) and xpath trickery. Xslt 2.0 just kicks some serious butt.

Books now link to Google Book Search.

Route and mountain lists now pull information (such as location, grading and effort) from the mountain pages. Of this info, currently only the grading info is used. The site now supports adding a crumb path to headers (currently only used on the North America page). At the same time, the generation of those lists is now more sophisticated in its linking: it only links the mountains and/or routes if the mountain/route actually exists on this site. If not, plain text is shown. This addition also had some impact on the layout.

I went on and added more web 2.0 functionality in the form of a "mashed-up search". Basically this adds the capability to launch direct context-sensitive searches on Flickr, Picasa, Wikipedia or SummitPost. This feature is currently only used on the North America page.

Since this site is hosted by a web host that doesn't provide me access to ImageMagick or any other similar tools, I have no way to read metadata directly from the image files to be used on the website. For quite a while I've been searching for a solution making it possible for me to read the metadata embedded into the files by my photo cataloguing apps, rather than having to enter the details outside of the files. One might guess that since the photo management gods at Adobe use xmp extensively in their tools such as Lightroom, Photoshop and Bridge, there would be a host of apps able to write all metadata to sidecar xmp files. One would be wrong. Well, there's no real shortage of apps capable of outputting xmp sidecar files for raw images. But when you need an app capable of writing xmp sidecar files for jpg files, the pickings are significantly slimmer. Granted, you are supposed to write the xmp info into the jpeg itself, but having that info there is not a great help when trying to output it onto a web page. Some apps are capable of writing xmp sidecars for single jpeg files, but try to do that for several images all at once and your options are very limited indeed. Luckily, ExifTool with ExifTool GUI is up to the task at hand. Seems that I no longer have an excuse for not implementing this.

I recently noticed that Internet Explorer (including Internet Explorer 7) did a pathetic job of displaying this very site; (at least) all unordered (ul) and ordered (ol) lists were displayed incorrectly. This seems to be caused by IE's inability to process such elements correctly whenever they are located within floated elements, which is rather sad given that many pure-CSS layouts rely on floats to build the layout, as is the case with this site as well. To make matters worse, there's no real solution to remedy this. Fortunately the upcoming Internet Explorer 8 (currently available as beta 2) seems to finally fix this.

However, since lists are heavily used on this site and the lack of bullets and improper indents can seriously impair the readability of some of the pages, I added IE conditional comments along with CSS targeted at IE7 that fix this problem, at least up to the point where the layout is pretty close to what it should look like.

I also changed the mime-type to application/xhtml+xml, which is what is recommended for xhtml 1.1. I am well aware that this may cause issues with old browsers. Furthermore, references to the xhtml 1.1 schema are now added to the html root element.

I decided it was time to move away from my old and trusted workhorse, namely xslt 1.0, and replace it with its more powerful big brother, xslt 2.0. Granted, support for xslt 2.0 is seriously lacking, but since I am only using xslt on the publishing side, that doesn't really matter. Therefore, off with msxml and in with Saxon-B. To let xslt 2.0 flex its serious processing muscle, I also implemented loads of internal linking, automated link generation and grouping. The Pennine Alps page is the only place where the new and improved xslt stylesheets are in use at the moment; however, they will replace all xslt 1.0 stylesheets shortly.

The change of xslt processor also prompted me to change the publishing procedure. Formerly I used a Windows batch file and the msxsl command line tool to perform the necessary transformations; with Saxon the principle is the same, but Saxon being a Java application, some changes were necessary to the publishing scripts as well.

There have been several changes, related both to cleanliness of maintenance and to features.

I made several changes in the publishing xsl's: cleaned up unnecessary functions (merged with already existing ones), added new functionality regarding the processing of history events (works better together with Operator now), added internal documentation along the principles of XSLTDoc, and all referenced documents are now handled through variables. Furthermore, there were changes to show references on features and routes. The final, and biggest, of the changes was linking between starting points (basic info on the article page) and the mountain page. This is still somewhat a work in progress, as currently only the altitude of the starting point is brought over to the mountain page.

Because references were added to features/routes, css had to be adjusted as well.

Loads of details have been added to Pennine Alps pages.

Fixed structural errors in Central Alps section of Eastern Alps page. Added and corrected information on Pennine Alps page.

Well, not so much in the Alps themselves, but the page has undergone some rather substantial changes. Most importantly, the Pennine (Valais) Alps are now separated into their own page, which has undergone plenty of structural changes and had quite a few updates. There have been some corrections and additions on the main Western Alps page as well, but nothing too drastic at this point.

I recently went on to transform the Eastern Alps page from the old hand-written xhtml format to the newer xml source format, which is then converted to output xhtml by xsl transformation. The idea is to get all the pages into the same source format, which makes it much easier to reuse data, query it and change the appearance should I choose to do so. However, converting the bulk of hand-written (x)html files to proper xml is painstaking to say the least, mostly because of my poor choice of not nesting all elements belonging together under a common parent element, to save me from writing a few extra html elements by hand.

Thus, xsl transformation (well, at least xsl 1.0) is really not up to the task at hand without some serious envelope pushing, possibly involving extension functions. Since this is one-time work anyway (as soon as I have proper xml documents I can easily use xsl for any format conversions), I started to look at alternatives. Not being a real programmer, I decided on NoteTab Light's clip book feature, which lets you easily script some serious search-and-replace using the infinite power of regular expressions. In all fairness though, I have to say that RegEx sports one seriously steep learning curve. However, once you get the hang of it, it kicks some serious ass.

Some changes in css code that apply to event lists.

Created a new resource file, tags, that is used to tag articles, photo galleries etc. in a controlled manner, much in the style of a controlled vocabulary. This is the first incarnation of such a system to be used on this site, and it is more than likely to get revisited sometime in the future. The most likely area to see improvements is handling of hierarchy; right now there's none of that.

Since I don't seem to find inspiration to finish what I started with photo galleries, I fixed the gallery page to at least make gallery pages somewhat usable. They haven't been functional for quite some time.

Added generation of attribute id in items in list type "item" to facilitate internal linking. Added partial support for . Small change in content css file.

Since I have recently employed quite a few techniques, I decided to continue the trend. This time around, I added of sorts, namely I exported newly geo-tagged locations from the Glockner Group page (Hohe Tauern) to a Google Maps map and embedded that map in said Glockner Group page.

While I was at it, I also implemented a couple more , this time and . This essentially forced me to employ tagging as well. Next time I feel the need to review something, it's likely to mean the inclusion of as well.

Finally, I fixed some bugs in RSS creation (change log), added titles to index page entries, and changed the presentation of the said entries slightly.

I decided it was time to go more semantic by implementing microformats. In more exact terms, I implemented hCard (html vCard) for the item list type, which I use to contain information about huts, hotels, lifts, tramways etc. The big idea behind hCard is to allow the browser to recognize contact information on a web site so that it can be easily picked up and exported to vCard, a common standard for storing contact information that can be exported from and imported into common contact management applications, such as Microsoft Outlook or Google's Gmail. Somewhat related, postal addresses work poorly in the mountains, so coordinates are much more useful for locating huts etc., especially if you use a gps device. Luckily, there's a microformat for that as well, namely geo. My own homebrew linking system seems to be not too far away from xfolk, so I might change the site a bit so that it takes advantage of that as well.

The forthcoming Firefox 3 and Internet Explorer 8 are likely the first browsers that can handle such microformats out of the box. That being said, there are already plugins for current browsers, e.g.
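
To make the hCard idea above concrete, here is a rough, hypothetical sketch of the kind of markup the item-list transformation could emit for a hut. The class names (vcard, fn, org, tel, geo, latitude, longitude) come from the hCard and geo specs; the source element names (item/@type, name, phone, lat, lon) are my assumptions, not the site's actual schema:

    <!-- Hedged sketch: emit hCard + geo classes for a hut entry. -->
    <xsl:template match="item[@type='hut']">
      <li class="vcard">
        <span class="fn org"><xsl:value-of select="name"/></span>
        <span class="tel"><xsl:value-of select="phone"/></span>
        <span class="geo">
          <abbr class="latitude" title="{lat}"><xsl:value-of select="lat"/></abbr>
          <abbr class="longitude" title="{lon}"><xsl:value-of select="lon"/></abbr>
        </span>
      </li>
    </xsl:template>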

Like I mentioned earlier, I wasn't too happy with the One True Layout I had adopted and used on my site for quite a while. The main issues for me were:

  • Page clipping when using inline anchors. Obviously very bad, since this site is very big on intra-page anchors.
  • The CSS code used to produce One True Layout is very complex, and I don't understand all of it. Take my word for it: not the best position to be in when trying to maintain the site.
So it was time to look for something new. After some window shopping, I finally decided to go with the principles introduced in the article 3 columns fluid layout on the TJK Design website, relying on the faux columns technique and stylesheet branching to keep things compatible and manageable.

Being the anal-retentive tinkerer I am, I couldn't go with the solution as it was, of course. Instead I had to roll up my sleeves and incorporate some changes, the most important of them being:

  • My version of the layout is elastic all around, rather than using pixel dimensions as in the TJK article.
  • Also, since I think modular CSS is the best thing since sliced bread, I divided the stylesheets into parts:
    • Main stylesheet that contains just layout instructions (that is to say, positioning), and not a single instruction that affects formatting (fonts, colors etc.). There are in fact several layout stylesheets, currently one for properly CSS-aware browsers and one for less CSS-savvy browsers. Furthermore, there is also a separate print stylesheet (@media is extremely handy, btw). The current incarnation of the print layout is not quite what I'd like it to be, mainly because I had to resort to absolute positioning to make it work. Also, at the moment the design breaks in Internet Explorer.
    • Formatting instructions are placed in separate stylesheet(s) that are imported into the layout stylesheets. The beauty of this is that should I want to change the layout part, I don't have to touch the formatting part at all. Much more manageable imho (a small sketch of the linking follows below).
  • The original version uses the well-known faux columns technique to create the equal-height-columns effect. I didn't want to put a mostly transparent background image on the right column to complete that effect, so my column separator line between the main content area and the right sidebar does not stretch when the menu column is the tallest.
Only a few changes to the stylesheets are needed to make a two-column version, which is likely to be the main workhorse on this site. I am also considering employing JavaScript to make it possible to hide/unhide the menu column to gain more space for the main content.
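
As an illustration of the stylesheet branching, a sketch of how the generated page head could link the sheets. The file names are placeholders, and the sheet for less CSS-capable browsers would be wired in separately (e.g. via conditional comments), which is omitted here:

    <!-- Hedged sketch: each layout sheet @imports the shared formatting
         sheet(s), so a layout change never touches formatting. The print
         sheet is selected via the media attribute. -->
    <xsl:template name="stylesheet-links">
      <link rel="stylesheet" type="text/css" media="screen" href="layout-2col.css"/>
      <link rel="stylesheet" type="text/css" media="print" href="layout-print.css"/>
    </xsl:template>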

To go on and actually implement the new layout, there are some things that I need to do:

  • Some images originally intended to be used on a black background need to be adjusted in order for them to look nicer on a light background.
  • Have to publish all pages at once, or keep the old layout styles around as well.
  • Verify the impact on legacy pages.

Yesterday, aside from doing work on the info page, I also climbed past a few crux issues that had stopped me before.

  • Bookcat stores descriptions with multiple paragraphs as text strings in which a line feed character is used to divide the string into paragraphs. The description is also exported like that, so instead of getting the data neatly paragraphed from Bookcat, I get an element whose text string contains empty lines. Unfortunately xml parsers don't much care about white space, so it shows up as one continuous string without any paragraphs. There were some solutions that might have been a tad easier to implement, but I insisted on going the whole nine yards, which in this case meant reading the input (i.e. the Bookcat export file) a paragraph at a time and putting each paragraph into a separate child element under the main description element. This makes it easy to create proper paragraphing when transforming to xhtml. The trick is to use a recursive template that checks whether two consecutive line feeds are found and processes conditionally from there. If they are, use substring-before to create the first paragraph and have the template call itself with substring-after as the input string. If no consecutive line feeds are found, just create a single paragraph and exit (a sketch follows this list). Things are generally easy when you know what to do.
  • The same approach worked for getting the file name part out of a path (split by /), which was a key ingredient in enabling the same sort of "metadata mining" for processing image files that I used earlier for book covers. With some rewrites and a few added functionalities, I was able to combine two image processing functions into a single one. The beauty of this is that it is now much easier to add new functionality, as I only have to change one function. Capitalizing on that, I added support for some more metadata fields. There are still some improvements to be made in the final output part (most importantly combining the two image processing functions there as well). The last remaining major thing to add in image processing is support for images that have both a thumbnail and a full-resolution version.
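
Here is a minimal sketch of the two recursive templates described above. The template and element names (split-paragraphs, para, file-name) are mine, and the break string may need adjusting to the export's actual line-end convention (e.g. CR LF):

    <!-- Sketch of the recursive paragraph split: emit everything before the
         first double line feed, then recurse on the rest. -->
    <xsl:template name="split-paragraphs">
      <xsl:param name="text"/>
      <xsl:variable name="break" select="'&#10;&#10;'"/>
      <xsl:choose>
        <xsl:when test="contains($text, $break)">
          <para><xsl:value-of select="substring-before($text, $break)"/></para>
          <xsl:call-template name="split-paragraphs">
            <xsl:with-param name="text" select="substring-after($text, $break)"/>
          </xsl:call-template>
        </xsl:when>
        <xsl:otherwise>
          <para><xsl:value-of select="$text"/></para>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:template>

    <!-- The same idea applied to pulling the file name out of a path:
         recurse on substring-after until no '/' remains. -->
    <xsl:template name="file-name">
      <xsl:param name="path"/>
      <xsl:choose>
        <xsl:when test="contains($path, '/')">
          <xsl:call-template name="file-name">
            <xsl:with-param name="path" select="substring-after($path, '/')"/>
          </xsl:call-template>
        </xsl:when>
        <xsl:otherwise>
          <xsl:value-of select="$path"/>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:template>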

Pretty complete rework of info page. Most of the information is the same, but organization is improved and some more information is added. Also some of the dead links have been removed.

ABC goes Ajax. I added the nice JavaScript code tablesort.js, which uses Ajax technology to turn static tables into sortable ones. While I was at it, I also tweaked the table CSS a bit to produce a nicer-looking table. Also, I couldn't resist adding a title attribute to grade links to show the link description as a tooltip. All this is currently in action on 4000m peaks in the Alps.
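
For illustration, a hypothetical sketch of the xsl side of this. The 'sortable' class the script is assumed to hook onto, the source element names (table/@type, grade/@scale, grade/@description) and the grades.html target are all placeholders, not the actual markup:

    <!-- Hedged sketch: mark the generated table for the sorting script
         and give grade links a tooltip via the title attribute. -->
    <xsl:template match="table[@type='peaklist']">
      <table class="sortable">
        <xsl:apply-templates/>
      </table>
    </xsl:template>

    <xsl:template match="grade">
      <!-- The title attribute becomes the browser tooltip. -->
      <a href="grades.html#{@scale}" title="{@description}">
        <xsl:value-of select="."/>
      </a>
    </xsl:template>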

I seem to have been in a very productive mood regarding the coding of the website during the last few days. Today I solved the crux of using Bookcat to manage book information, namely inserting the images. The solution involves:

  1. Import the book images stored by Bookcat into image cataloguing software that is able to export the image dimensions (exif data) into an xml file together with the file name.
  2. Use an xslt file to match the cover image filename (which is part of the book details exported by Bookcat) against the image xml file, using the file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote a few lines of extra code in the xsl to "scale" the image dimensions. Well, it does not really scale a thing; it just compares the image dimensions against a defined max. width and height, then calculates new values for the width and height attributes so that the image does not exceed the set max values and the original aspect ratio is retained (a rough sketch follows at the end of this entry).
This very same technique works like a charm for reading other image properties as well, provided they are included in the xml export file. Yes, that includes iptc and xmp data, provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly into documents; instead, things like that will be written directly to the images using standard tags.
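
A condensed, hypothetical sketch of steps 2 and 4. The image-export element and attribute names (image/@file, @width, @height) and the max sizes are assumptions; the key lookup and the arithmetic are the point:

    <!-- Hedged sketch: look up a cover image by file name and compute
         bounded width/height attributes that preserve the aspect ratio. -->
    <xsl:key name="img-by-file" match="image" use="@file"/>

    <xsl:template name="cover-dimensions">
      <xsl:param name="file"/>
      <xsl:param name="max-w" select="120"/>
      <xsl:param name="max-h" select="180"/>
      <!-- Assumes the image export has been pulled into the same source
           document; if it lives in a separate file, switch context with
           document() before calling key(). -->
      <xsl:variable name="img" select="key('img-by-file', $file)"/>
      <xsl:variable name="w" select="$img/@width"/>
      <xsl:variable name="h" select="$img/@height"/>
      <!-- Pick the tighter constraint so neither dimension exceeds its max
           and the aspect ratio is preserved; never scale up. -->
      <xsl:variable name="ratio">
        <xsl:choose>
          <xsl:when test="($max-w div $w) &lt; ($max-h div $h)">
            <xsl:value-of select="$max-w div $w"/>
          </xsl:when>
          <xsl:otherwise>
            <xsl:value-of select="$max-h div $h"/>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:variable>
      <xsl:variable name="factor">
        <xsl:choose>
          <xsl:when test="$ratio &gt; 1">1</xsl:when>
          <xsl:otherwise><xsl:value-of select="$ratio"/></xsl:otherwise>
        </xsl:choose>
      </xsl:variable>
      <xsl:attribute name="width"><xsl:value-of select="round($w * $factor)"/></xsl:attribute>
      <xsl:attribute name="height"><xsl:value-of select="round($h * $factor)"/></xsl:attribute>
    </xsl:template>

The template would be called from the book-cover handling with the file name taken from the Bookcat export.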

  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags.

I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:

  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags.

I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:

  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags.

I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:

  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags.

I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:

  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags.

I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:

  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags.

I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:

  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags. I seem to have been in very productive mood regarding to coding of the website during last few days. Today I solved the crux in regards of using Bookcat to manage book information, namely inserting the images. The solution involves:
  1. Import book images stored by Bookcat into image cataloguing software, that is able to export image dimensions (exif data) into xml file together with file name.
  2. Use xslt file to match cover image filename (which is part of the book details exported by Bookcat) against image xml-file using file name as a key.
  3. Copy over the image details as needed.
  4. Since Bookcat does not limit the cover sizes in any way (and those vary a great deal), I wrote few lines of extra code in xsl to "scale" the image dimensions. Well, it does not really scale a thing, it just compares image dimensions against defined max. width and height, then sets calculates new values for width and height attributes so that the image does not exceed the set max values and original image aspect ratio is retained.
This very same technique works like a charm for reading other image properties as well, provided they are included in xml export file. Yes, that includes iptc and xmp data provided those are exported by the image cataloguing application of your choice. From now on there will be no more image captions, copyright information or the like written directly to documents, instead things like that will be written directly to images using standard tags.